Wednesday, February 21, 2024

Useful IBM links for PowerVC and VIOS environments

1. Configuration for higher availability in a dual-VIOS environment

    - https://www.ibm.com/support/pages/multipathing-and-disk-resiliency-vscsi-dual-vios-configuration

2. How to perform LPM in a dual-VIOS environment when one VIOS has failed
    - https://www.ibm.com/support/pages/new-hmc-firmware-840-feature-allowinactivesourcestoragevios-allows-lpm-dual-vios-configuration-when-one-vios-failed
    - https://supportcontent.ibm.com/support/pages/hsclb937-during-live-partition-mobility-migration-while-one-vios-down

3. LPM errors in an NPIV environment
   - https://www.ibm.com/support/pages/hscla319-during-lpm-validation-aix-npiv-client

4. PowerVC data collection
   - https://www.ibm.com/support/pages/mustgather-powervc-data-collection

5. How to reset the CMDB
   - https://community.ibm.com/community/user/power/discussion/vios-cmdb-cleanup

6. Documentation by product version
   - PVC 2.0.0 : https://www.ibm.com/docs/en/powervc/2.0.0
   - PVC 2.0.1  : https://www.ibm.com/docs/en/powervc/2.0.1
   - PVC 2.0.2 : https://www.ibm.com/docs/en/powervc/2.0.2
   - PVC 2.0.3  : https://www.ibm.com/docs/en/powervc/2.0.3
   - PVC 2.1.0 : https://www.ibm.com/docs/en/powervc/2.1.0
   - PVC 2.1.1  : https://www.ibm.com/docs/en/powervc/2.1.1
   - PVC 2.2.0 : https://www.ibm.com/docs/en/powervc-cloud/2.2.0

How to check and address heavy packet drops in a virtualized environment


When building virtualization and a private cloud on IBM Power systems, the most widely used configuration is virtual Ethernet through an SEA (Shared Ethernet Adapter).

This approach is relatively simple to configure, supports VLAN tagging through IEEE 802.1Q, and provides a load-balancing mode that can distribute network load across the VIOSes.

The figure below is a logical view of how a single-VIOS environment provides virtual network service through an SEA. As shown, traffic from the virtual Ethernet adapter serving the VM is passed through the Hypervisor to the virtual Ethernet adapter (server/trunk role) configured on the VIOS, and from there the network data flows through the SEA and the physical network adapter out to the network switch.

This is not a problem while each VM handles only a small number of packets, but as network load increases, the load inevitably concentrates on the server-role virtual Ethernet adapter configured on the VIOS. If this is not managed properly, packets will be dropped and, in severe cases, service issues are unavoidable.


You can check whether packet failures are occurring with the following commands:

# entstat -d entX
or
# netstat -v

Hypervisor Send Failures : 79XXX
Receiver Failures : 79XXX
Send Errors: 0
Hypervisor Receive Failures : 0
Invalid VLAN ID Packets: 0
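
For ongoing monitoring, a minimal sketch like the following can help (entX and the 60-second interval are placeholders; run it where entstat is available, for example as root on the VIOS). It appends the failure counters to a log so they can be compared against a baseline:

# while true; do date; entstat -d entX | grep -i "failures"; sleep 60; done >> /tmp/entX_failures.log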

Packet-processing issues can occur in the following cases:
   1. Insufficient VIOS resources
   2. Insufficient buffers on the VIOS virtual adapter
   3. Network issues, including VLAN and MAC address problems
   4. Other causes

Case 1 is a resource shortage, so you can allocate additional resources, or reduce the overall load by moving network-heavy VMs to a less utilized server with LPM (Live Partition Mobility).
Case 3 requires a review of the entire network configuration, and that review should be carried out together with the network team.

Apart from that, what can be handled within the current VIOS configuration is tuning the buffer values set on the VIOS virtual adapter (server role); the best-practice values recommended for POWER9 and POWER10 servers are shown below.

[Recommended buffer values on the VIOS]
 # chdev -l entX -a min_buf_tiny=4096 -a max_buf_tiny=4096 -P 
 # chdev -l entX -a min_buf_small=4096 -a max_buf_small=4096 -P
 # chdev -l entX -a min_buf_medium=2048 -a max_buf_medium=2048 -P  
 # chdev -l entX -a min_buf_large=256 -a max_buf_large=256 -P   
 # chdev -l entX -a min_buf_huge=64 -a max_buf_huge=64 -P  

Because these are attributes of an adapter that is currently in use, they cannot be changed online; the -P option is required so that the change is applied at the next VIOS reboot.
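
To check the values, a quick sketch (entX is a placeholder): lsattr shows the value recorded in the ODM (which, after chdev with -P, is the value that will be active after the reboot), while entstat -d shows the buffers the running adapter is currently using (AIX grep -p prints the matching paragraph).

# lsattr -El entX | grep buf
# entstat -d entX | grep -p "Receive Buffers"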

After tuning, check the packet-failure values again with netstat or entstat. Since these counters keep accumulating after a reboot, you must record a baseline before you start monitoring.

For more details, refer to the URL below.
   
https://www.ibm.com/support/pages/causes-hypervisor-send-and-receive-failures

Sunday, June 11, 2023

VLANs being deleted when performing LPM in a PowerVC environment

In a cloud environment, which must be built with minimal dependency on specific hardware to increase service availability, VMs inevitably move between servers.

In a private cloud implemented on IBM Power Systems, LPM (Live Partition Mobility), the online VM relocation feature, may be used frequently as users require; LPM can be performed either through the HMC or through PowerVC, the cloud stack product.

The following issue can occur in an environment where PowerVC is installed:
- During LPM of a VM from server 1 to server 2, a specific VLAN is deleted on the target server, causing a service problem

This happens because, for rarely used VLANs, PowerVC deletes the VLAN definition when the last VM using that VLAN on a particular server is migrated away or deleted; the PowerVC nova option "automated_powervm_vlan_cleanup" defaults to True, which enables this behavior.

"autoated_powervm_vlan_cleanup" 설정값은 아래와 같이 확인 및 수정은 다음과 같이 수행할 수 있습니다. 

1. Log in to the PowerVC server
2. cd /etc/nova 
[root@powervc~]# cd /etc/nova 
[root@powervc nova]# ls -al
합계 496
drwxr-x---.   4 nova nova  4096  6월 12 11:02 .
drwxr-xr-x. 166 root root 12288 12월 16 18:23 ..
-rw-r-----.   1 root nova  3957  6월  3  2020 api-paste.ini
-rw-r-----.   1 root nova  2053  6월  3  2020 api_audit_map.conf
-rw-r-----.   1 nova nova   720 12월  7  2020 flavors-config-powerkvm.json
-rw-r-----.   1 nova nova  2606 12월  7  2020 flavors-config-powervm.json
-rw-r-----.   1 nova nova   397 12월  4  2019 hapolicy_conf.xml
-rw-r-----.   1 nova nova 70667  8월  5  2021 nova-824722L_841F58A.conf
-rw-r-----.   1 nova nova 70704 12월  9  2020 nova-828422A_21406CV.conf
-rw-r-----.   1 nova nova 70667 12월  9  2020 nova-828642A_84D7B4V.conf
-rw-r-----.   1 root nova   574  9월 15  2020 nova-health.conf
-rw-r-----.   1 nova nova 70090  8월  5  2021 nova.conf
-rw-r-----.   1 nova nova 59791  9월  7  2020 nova.conf.baseline
-rw-r-----.   1 root root 70090  6월 12 10:59 nova.lth
-rw-r-----.   1 root nova     4  9월  7  2020 policy.json
drwxr-xr-x.   2 root root  4096  9월 20  2020 powervc-health-policy
-rw-r-----.   1 nova nova    47  6월 12 11:18 prs_compute_node_status.json
-rw-r--r--.   1 root root    79  9월  7  2020 release
-rw-r-----.   1 root nova   966  9월  7  2020 rootwrap.conf
drw-r-----.   2 root nova     6  9월  7  2020 rootwrap.d
-rw-r-----.   1 nova nova  1952  2월 12  2020 rtpolicy_conf.xml
-rw-r-----.   1 nova nova  2297 12월 13  2020 rtpolicy_conf_powerkvm.xml.pvc
-rw-r-----.   1 nova nova  1952 12월 13  2020 rtpolicy_conf_powervm.xml.pvc

 3. In the nova directory, check all files named nova.conf, nova-TYPEMODEL_SERIALNO.conf, and so on
[root@powervc nova]# ls -al nova*.conf 
-rw-r-----. 1 nova nova 70667 8월 5 2021 nova-824722L_841F58A.conf 
-rw-r-----. 1 nova nova 70704 12월 9 2020 nova-828422A_21406CV.conf 
-rw-r-----. 1 nova nova 70667 12월 9 2020 nova-828642A_84D7B4V.conf 
-rw-r-----. 1 root nova 574 9월 15 2020 nova-health.conf 
-rw-r-----. 1 nova nova 70090 8월 5 2021 nova.conf 

4. Check the "automated_powervm_vlan_cleanup" value in each file
[root@powervc nova]# grep automated_powervm_vlan_cleanup nova*.conf 
nova-824722L_841F58A.conf:automated_powervm_vlan_cleanup = True 
nova-828422A_21406CV.conf:automated_powervm_vlan_cleanup = True 
nova-828642A_84D7B4V.conf:automated_powervm_vlan_cleanup = True 
nova.conf:automated_powervm_vlan_cleanup = True 

5. Change and verify the "automated_powervm_vlan_cleanup" setting in each file: automated_powervm_vlan_cleanup = True --> automated_powervm_vlan_cleanup = False (a sample one-liner is shown below)
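
One hedged way to make the change in bulk (back up the files first; the exact spacing around "=" must match what grep showed above):

[root@powervc nova]# for f in nova*.conf; do cp -p $f $f.bak; done
[root@powervc nova]# sed -i 's/^automated_powervm_vlan_cleanup = True/automated_powervm_vlan_cleanup = False/' nova*.conf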

[root@powervc nova]# grep automated_powervm_vlan_cleanup nova*.conf 
nova-824722L_841F58A.conf:automated_powervm_vlan_cleanup = False 
nova-828422A_21406CV.conf:automated_powervm_vlan_cleanup = False 
nova-828642A_84D7B4V.conf:automated_powervm_vlan_cleanup = False 
nova.conf:automated_powervm_vlan_cleanup = False 

6. Restart the PowerVC services
Note: especially when some VMs are managed not only by PowerVC but also separately by users, this option can cause unwanted changes to the SEA configuration by PowerVC, so it should be kept set to False.

[root@powervc nova]# /opt/ibm/powervc/bin/powervc-services stop 
panko 서비스 중지 중... 
swift 서비스 중지 중... 
gnocchi 서비스 중지 중... 
validator 서비스 중지 중... 
clerk 서비스 중지 중... 
bumblebee 서비스 중지 중... 
health 서비스 중지 중... 
ceilometer 서비스 중지 중... 
nova 서비스 중지 중... 
neutron 서비스 중지 중... 
ego 서비스 중지 중... 
Shut down LIM on ...... done 
cinder 서비스 중지 중... 
glance 서비스 중지 중... 
rabbitmq 서비스 중지 중... 
httpd 서비스 중지 중... 
db 서비스 중지 중... 

[root@powervc nova]# /opt/ibm/powervc/bin/powervc-services start 
db 서비스 시작 중... 
httpd 서비스 시작 중... 
rabbitmq 서비스 시작 중... 
glance 서비스 시작 중... 
cinder 서비스 시작 중... 
ego 서비스 시작 중... 
neutron 서비스 시작 중... 
nova 서비스 시작 중... 
ceilometer 서비스 시작 중... 
health 서비스 시작 중... 
bumblebee 서비스 시작 중... 
clerk 서비스 시작 중... 
validator 서비스 시작 중... 
gnocchi 서비스 시작 중... 
swift 서비스 시작 중... 
panko 서비스 시작 중... 

[root@powervc nova]# /opt/ibm/powervc/bin/powervc-services status 
● panko-api.service - OpenStack Panko API Server Active: active (running) since 월 2023-06-12 11:12:48 KST; 2min 17s ago 
● openstack-swift-account.service - OpenStack Object Storage (swift) - Account Server Active: active (running) since 월 2023-06-12 11:12:47 KST; 2min 18s ago 
● openstack-swift-object.service - OpenStack Object Storage (swift) - Object Server Active: active (running) since 월 2023-06-12 11:12:48 KST; 2min 18s ago 
....... 
● httpd.service - The Apache HTTP Server Active: active (running) since 월 2023-06-12 11:12:41 KST; 2min 26s ago 
● memcached.service - Memcached Active: active (running) since 월 2023-06-12 11:12:41 KST; 2min 26s ago 
● mariadb.service - MariaDB database server Active: active (running) since 월 2023-06-12 11:12:41 KST; 2min 26s ago

Sunday, July 4, 2021

How to completely remove PowerHA cluster information

During maintenance, you sometimes cannot be sure whether the existing PowerHA cluster information has been completely removed. In that case, a cluster created with PowerHA can be completely deleted with the following procedure.

 1) node 1
# export CAA_FORCE_ENABLED=1
# rmcluster -f -r hdiskX -v 
# rmdev -dl cluster0 
# odmdelete -q name=cluster0 -o CuAt 
# odmdelete -o HACMPsircol 


 2) node 2
# export CAA_FORCE_ENABLED=1 
# clusterconf -r hdiskX 
# rmdev -dl cluster0 
# odmdelete -q name=cluster0 -o CuAt 
# odmdelete -o HACMPsircol 

 3) reboot both nodes 

 4) node 1
# mkvg -f -y scrubvg hdiskX 
# varyoffvg scrubvg
# exportvg scrubvg

 5) node 2
# importvg -y scrubvg 
# varyoffvg scrubvg 
# exportvg scrubvg 

Afterwards, register the CAA repository disk and run cluster verification & synchronization, then confirm that the caavg_private VG is created correctly and that CAA comes back up (see the sketch below). In most cases, deleting the cluster this way and re-creating it will let you build PowerHA normally again, but if CAA creation issues still occur, the following steps can be considered.
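
As a quick sanity check after the repository disk is registered (a sketch; clmgr is available in PowerHA 7.x, and lscluster -m shows the CAA node state):

# clmgr verify cluster
# clmgr sync cluster
# lspv | grep caavg_private
# lscluster -m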

 1) Check for leftover device entries
     - The problem occurs when caavg_private itself was deleted in the old environment, but CAA-related LVs were left behind as garbage.

2) Create CAA manually
node 1: 
$ /usr/sbin/mkvg -f -y caavg_private -s 64 hdisk1 
$ /usr/sbin/mklv -y caalv_private1 -t boot caavg_private 1 hdisk1
$ /usr/sbin/mklv -y caalv_private2 -t boot caavg_private 1 hdisk1 
$ /usr/sbin/mklv -y caalv_private3 -t boot caavg_private 4 hdisk1 
$ /usr/bin/dd if=/dev/zero of=/dev/caalv_private3 bs=1024 count=100 
$ /usr/sbin/mklv -y powerha_crlv -t boot caavg_private 1 hdisk1

node 2:
$ /usr/sbin/importvg -y caavg_private -O hdisk1 
$ /usr/sbin/varyonvg -b -u -O caavg_private

Wednesday, December 26, 2018

Key GPFS tuning parameters


This section describes some of the configuration parameters available in GPFS, including some notes on how they may affect performance.
These are GPFS configuration parameters that can be set cluster-wide, on a specific node, or on sets of nodes. To view the configuration parameters that have been changed from the default, run mmlsconfig. To view the active value of any of these parameters (v3.4 and later), run mmdiag --config. To change any of these parameters, use mmchconfig.

For example, to change the pagepool setting on all nodes:
 mmchconfig pagepool=256M
Some options take effect immediately using the -i or -I flag to mmchconfig; some take effect after the node is restarted. Use -i to make the change permanent and affect the running GPFS daemon immediately. Use -I to affect the GPFS daemon only (it reverts to the saved settings on restart). Refer to the GPFS documentation for details. In addition, some parameters have a section called Tuning Guidelines.

These are general guidelines that can be used to determine a starting point for tuning a parameter.
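
As a quick illustration of that workflow (the parameter and value here are only examples):

 mmlsconfig                          (parameters changed from the default)
 mmdiag --config | grep pagepool     (active value on this node, v3.4 and later)
 mmchconfig pagepool=1G -i           (change cluster-wide and apply immediately)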
  
 GPFSCmdPortRange 
 leaseRecoveryWait 
logfile
maxBufferDescs
maxFilesToCache 
maxMBpS 
maxMissedPingTimeout 
maxReceiverThreads 
maxStatCache 
minMissedPingTimeout 
nfsPrefetchStrategy 
nsdBufSpace 
nsdInlineWriteMax 
nsdMaxWorkerThreads 
nsdMultiQueue 
nsdSmallBufferSize 
nsdSmallThreadRatio 
nsdThreadMethod 
nsdThreadsPerQueue 
numaMemoryInterleave 
opensslLibName 
pagepool 
prefetchPct 
prefetchThreads 
privateSubnetOverride 
readReplicaPolicy 
scatterBuffers 
scatterBufferSize 
seqDiscardThreshold 
sharedMemLimit 
socketMaxListenConnections 
socketRcvBufferSize 
socketSndBufferSize 
statCacheLimit
tokenMemLimit
verbsLibName 
verbsRdmaQpRtrSl 
verbsrdmasperconnection 
verbsrdmaspernode 
worker1Threads 
worker3Threads 
writebehindThreshold 
ignorePrefetchLUNCount    
leaseRecoveryWait  

The leaseRecoveryWait parameter defines how long the FS manager of a filesystem will wait after the last known lease expiration of any failed nodes before running recovery. 

A failed node cannot reconnect to the cluster before recovery is finished. 
The leaseRecoveryWait parameter value is in seconds and the default is 35. Making this value smaller increases the risk that there may be IO in flight from the failing node to the disk/controller when recovery starts running. 

This may result in out of order IOs between the FS manager and the dying node. 

In most cases where a node is expelled from the cluster, there is either a problem with the network or the node is running out of resources, such as paging. For example, if there is an application running on a node paging the machine to death or overrunning network capacity, GPFS may not have a chance to contact the Cluster Manager node to renew its lease within the timeout period.

GPFSCmdPortRange  
When GPFS administration commands are executed they may use one or more TCP/IP ports to complete the command. For example, when using standard ssh an admin command opens a connection using port 22. In addition to the remote shell or file copy command ports, there are additional ports that are opened to pass data to and from remote GPFS daemons. By default a GPFS command uses one of the ephemeral ports, and the remote node handling the command (typically the Cluster Manager node or one of the File System Manager nodes) connects back to the node originating the command. In some environments you may want to limit the range of ports used by GPFS administration commands. You can control the ports used by the remote shell and file copy commands by using different tools or configuring these tools to use different ports. The ports used by the GPFS daemon for administrative command execution can be defined using the GPFS configuration parameter GPFSCmdPortRange.
 mmchconfig GPFSCmdPortRange=lowport-highport 

 This allows you to limit the ports used for GPFS administration mm* command execution. You need enough ports to support several concurrent commands from a single node, so you should define 20 or more ports for this purpose. 
Example: mmchconfig GPFSCmdPortRange=30000-30100

logfile
The log file size should be larger for high-metadata-rate systems to prevent glitches when the log has to wrap. It can be as large as 16MB on large-blocksize file systems. To set this parameter, use the -L flag on mmcrfs.

minMissedPingTimeout  
The minMissedPingTimeout and maxMissedPingTimeout parameters set limits on the calculation of missedPingTimeout (MPT), which is the allowable time for pings to fail from the Cluster Manager (CM) to a node that has not renewed its lease. The default MPT is leaseRecoveryWait minus 5 seconds. The CM will wait MPT seconds after the lease has expired before declaring a node out of the cluster. The minMissedPingTimeout and maxMissedPingTimeout values are in seconds, and the defaults are 3 and 60 respectively. If these values are changed, only GPFS on the quorum nodes (from which the CM is elected) needs to be recycled for the change to take effect. This can be used to ride out something like a central network switch failover (or other network glitches) that may take longer than leaseRecoveryWait. It may prevent false node-down conditions, but it will extend the time for node recovery to finish, which may block other nodes from making progress if the failing node held tokens for many shared files. Just as in the case of leaseRecoveryWait, in most cases where a node is expelled from the cluster there is either a problem with the network or the node is running out of resources, such as paging. For example, if there is an application running on a node paging the machine to death or overrunning network capacity, GPFS may not have a chance to contact the Cluster Manager node to renew its lease within the timeout period.
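
A hedged example of adjusting these limits (the values 10 and 120 are illustrative only, and "oneQuorumNode" is a placeholder; recycle the quorum nodes one at a time so quorum is not lost):

 mmchconfig minMissedPingTimeout=10
 mmchconfig maxMissedPingTimeout=120
 mmshutdown -N oneQuorumNode ; mmstartup -N oneQuorumNode     (repeat for each quorum node)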

maxMissedPingTimeout  
See: minMissedPingTimeout

maxReceiverThreads  
The maxReceiverThreads parameter is the number of threads used to handle incoming TCP packets. These threads gather the packets until there are enough bytes for the incoming RPC (or RPC reply) to be handled. For some simple RPCs the receiver thread handles the message immediately; otherwise it hands the message off to handler threads. maxReceiverThreads defaults to the number of CPUs in the node, up to 16. It can be configured higher if necessary, up to 128 for very large clusters.

pagepool  
The pagepool parameter determines the size of the GPFS file data block cache. Unlike local file systems that use the operating system page cache to cache file data, GPFS allocates its own cache called the pagepool. The GPFS pagepool is used to cache user file data and file system metadata. The old default pagepool size of 64MB is too small for many applications, so this is a good place to start looking for performance improvement. In release 3.5 the default is 1GB for new installs; when upgrading, the old setting is kept. Along with file data, the pagepool supplies memory for various types of buffers, such as prefetch and write-behind buffers.

Sequential IO: The default pagepool size may be sufficient for sequential IO workloads; however, a value of 256MB is known to work well in many cases. To change the pagepool size, use the mmchconfig command. For example, to change the pagepool size to 2GB on all nodes in the cluster, execute: mmchconfig pagepool=2G [-i]. If the file system blocksize is larger than the default (256K), the pagepool size should be scaled accordingly to allow the same number of buffers to be cached.

Random IO: The default pagepool size will likely not be sufficient for random IO or workloads involving a large number of small files. In some cases allocating 4GB, 8GB or more memory can improve workload performance.

Random Direct IO: For database applications that use Direct IO, the pagepool is not used for any user data. Its main purpose in this case is for system metadata and caching the indirect blocks for the files.

NSD servers: Assuming no applications or File System Manager services are running on traditional NSD servers (not GNR or GSS servers), the pagepool is only used transiently by the NSD worker threads to gather data from client nodes and write the data to disk. The NSD server does not cache any of the data. Each NSD worker just needs one pagepool buffer per operation, and the buffer can potentially be as large as the largest filesystem blocksize that the disks belong to. With the default NSD configuration there will be 3 NSD worker threads per LUN (nsdThreadsPerDisk, pre GPFS 3.5) or per queue (GPFS 3.5 and later) that the node services, so the amount of memory needed in the pagepool will be 3 * #LUNs * maxBlockSize. The target amount of space in the pagepool for NSD workers is controlled by nsdBufSpace, which defaults to 30%, so the pagepool should be large enough that 30% of it provides enough buffers.

32-bit operating systems: On 32-bit operating systems the pagepool is limited by the GPFS daemon's address space. This means that it cannot exceed 4GB in size and is often much smaller due to other limitations.
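
A worked sizing sketch for an NSD server, using assumed numbers (100 LUNs served, 8 MiB maximum filesystem blocksize, default nsdBufSpace of 30%; "nsdServers" is a placeholder node class):

 NSD buffer space needed: 3 threads x 100 LUNs x 8 MiB = 2400 MiB
 Required pagepool: 2400 MiB / 0.30 = 8000 MiB, so roughly pagepool=8G
 mmchconfig pagepool=8G -N nsdServers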

opensslLibName  
To initialize multi-cluster communications GPFS uses OpenSSL. When initializing OpenSSL, GPFS looks for these ssl libraries: libssl.so:libssl.so.0:libssl.so.4 (as of GPFS 3.4.0.4). If you are using a newer version of OpenSSL, the filename may not match one in the list (for example libssl.so.6). You can use the opensslLibName parameter to tell GPFS to look for the newer version instead: mmchconfig opensslLibName="libssl.so.6"

readReplicaPolicy
Options: default, local

Default: By default, when data is replicated GPFS spreads the reads over all of the available failure groups. This configuration is typically best when the nodes running GPFS have equal access to both copies of the data.

Local: A value of local has two effects on reading data in a replicated storage pool. Data is read from: 1. a local block device, or 2. a "local" NSD server. A local block device means that the path to the disk is through a block special device, for example /dev/sd* on Linux or a /dev/hdisk device on AIX. GPFS does not do any further determination, so if disks at two sites are connected with a long-distance fiber connection, GPFS cannot distinguish which is local; to use this option, connect the sites using the NSD protocol over TCP/IP or InfiniBand Verbs (Linux only). Further, GPFS uses the subnets configuration setting to determine which NSD servers are "local" to an NSD client. For NSD clients to benefit from "local" read access, the NSD servers supporting the local disk need to be on the same subnet as the NSD clients accessing the data, and that subnet needs to be defined using the "subnets" configuration parameter. This parameter is useful when GPFS replication is used to mirror data across sites and there are NSD clients in the cluster; it keeps read access requests from being sent over the WAN.

scatterBuffers  
The scatterBuffers parameter affects how GPFS organizes file data in the pagepool. The default is scatterBuffers=yes (starting in GPFS 3.5). The scatterBuffers parameter was introduced in GPFS 3.5 as a method to better handle fragmented pagepool memory. It behaves differently depending on the operating system and drivers you are using. It is best to test different settings of scatterBuffers and scatterBufferSize (see scatterBufferSize) to see what works best for your application. Tuning guidelines: If you are on AIX and your workload is mostly sequential, disable the scatterBuffers feature by setting scatterBuffers=no. If you are not observing full-blocksize IOs being sent to the storage during sequential IO operations, disabling scatterBuffers or increasing scatterBufferSize may help (see scatterBufferSize).

scatterBufferSize
The scatterBufferSize parameter sets the size of the scatter buffer used by GPFS. The default is 32KiB (starting in GPFS 3.5). Tuning guidelines: When tuning for sequential IO workloads it may help to increase scatterBufferSize to match the file system blocksize. If you are not observing full-blocksize IOs being sent to the storage during sequential IO operations, disabling scatterBuffers or increasing scatterBufferSize may help.

seqDiscardThreshold  
The seqDiscardThreshold parameter affects what happens when GPFS detects a sequential read (or write) access pattern and has to decide what to do with the pagepool buffer after it is consumed (or flushed by writebehind threads). This is the highest-performing option for the case where a very large file is read (or written) sequentially. The default for this value is 1MB, which means that if a file is sequentially read and is greater than 1MB, GPFS does not keep the data in cache after consumption. There are some instances where large files are reread often by multiple processes, data analytics for example. In some cases you can improve the performance of these applications by increasing seqDiscardThreshold to be larger than the files you would like to cache. Increasing seqDiscardThreshold tells GPFS to attempt to keep as much data in cache as possible for files below that threshold. The value of seqDiscardThreshold is a file size in bytes; the default is 1MB (1048576 bytes). Tuning guidelines: Increase this value if you want to cache most files that are sequentially read or written and are larger than 1MB in size. Make sure there are enough buffer descriptors to cache the file data (see: maxBufferDescs).

sharedMemLimit 
The sharedMemLimit parameter allows you to increase the amount of memory available to store various GPFS structures, including the inode cache and tokens. When the value of sharedMemLimit is set to 0, GPFS automatically determines a value for sharedMemLimit. The default value varies on each platform; in GPFS 3.4 the default on Linux and Windows is 256MB. In GPFS 3.4 on Windows, sharedMemLimit can only be used to decrease the size of the shared segment. To determine whether or not increasing sharedMemLimit may help, you can use the mmfsadm dump fs command. For example, if you run mmfsadm dump fs and see that you are not getting the desired levels of maxFilesToCache (aka fileCacheLimit) or maxStatCache (aka statCacheLimit), you can try increasing sharedMemLimit.

# mmfsadm dump fs | head -8
Filesystem dump:
  FSP 0x18051D75AB0
  UMALLOC limits:
    bufferDescLimit    40000 desired 40000
    fileCacheLimit      4000 desired  4000
    statCacheLimit      1000 desired  1000
    diskAddrBuffLimit    200 desired   200

The sharedMemLimit parameter is set in bytes. As of release 3.4 the largest sharedMemLimit on Windows is 256M. On Linux and AIX the largest setting is 256G on 64-bit architectures and 2047M on 32-bit architectures. Using larger values may not work on some platforms/GPFS code versions. The actual sharedMemLimit on Linux may be reduced to a percentage of the kernel vmalloc space limit.

socketMaxListenConnections  
The parameter socketMaxListenConnections sets the number of TCP/IP sockets that the daemon can listen on in parallel. This tunable was introduced in 3.4.0.7 specifically for large clusters, where an incast message to a manager node from a large number of client nodes may require multiple listen() calls and time out. To be effective, the Linux tunable /proc/sys/net/core/somaxconn must also be modified from the default of 128; the effective value is the smaller of the GPFS tunable and the kernel tunable. Incoming connection requests may be silently dropped by the kernel networking component if the GPFS listen queue backlog is exceeded. When many nodes are TCP-connecting to a node, a TCP connect may fail if the connection request is dropped too many times; at this point the GPFS node calling connect sends an expel request.

Parameter values: Versions prior to 3.4.0.7 are fixed at 128. The default remains 128 on Linux and 1024 on AIX. The Linux kernel tunable also defaults to 128. The minimum and maximum values are 1 and 65536.

Tuning guidelines: Set the value of socketMaxListenConnections greater than or equal to the number of nodes that will create a TCP connection to any one node. For clusters under 500 nodes, tuning this value should not be required; for larger clusters it should be set to approximately the number of nodes in the GPFS cluster.

Example:
 mmchconfig socketMaxListenConnections=1500
 echo 1500 > /proc/sys/net/core/somaxconn   (or)   sysctl -w net.core.somaxconn=1500

AIX: The command no -p -o somaxconn must also be used to increase the value of somaxconn to a value greater than or equal to the value of socketMaxListenConnections.
Linux: The sysctl.conf file must be modified to increase the value of net.core.somaxconn to a value greater than or equal to socketMaxListenConnections.

socketRcvBufferSize  
The parameter socketRcvBufferSize sets the size of the TCP/IP receive buffer used for NSD data communication. This parameter is in bytes.

socketSndBufferSize
The parameter socketSndBufferSize sets the size of the TCP/IP send buffer used for NSD data communication. This parameter is in bytes.

tokenMemLimit
The parameter tokenMemLimit sets the size of memory available for manager nodes to use for caching tokens. The default is to use one memory segment (different on each operating system). To allow nodes acting as token manager to cache more tokens, increase the value of tokenMemLimit. You only need to set this parameter on manager nodes that may be doing token management. This parameter is in bytes. (See maxFilesToCache)

maxBufferDescs
The value of maxBufferDescs defaults to 10 * maxFilesToCache, up to pagepool size / 16K. When caching small files it does not need to be more than a small multiple of maxFilesToCache, since only OpenFile objects (not stat cache objects) can cache data blocks. If an application needs to cache very large files, you can tune maxBufferDescs to ensure there are enough descriptors to cache them. To see the current value use the mmfsadm command:

# mmfsadm dump fs | head -8
[statistics never reset]
Filesystem dump:
  FSP 0x18051D75AB0
  UMALLOC limits:
    bufferDescLimit    40000 desired 40000
    fileCacheLimit      4000 desired  4000
    statCacheLimit      1000 desired  1000
    diskAddrBuffLimit    200 desired   200

In this case there are 10,000 buffer descriptors configured. If you have a 1MiB file system blocksize and want to cache a 20GiB file, you will not have enough buffer descriptors. In this case, to cache a 20GiB file, increase maxBufferDescs to at least 20,480 (20GiB/1MiB = 20,480). It is not exactly a one-to-one mapping, so a value of 32k may be appropriate: mmchconfig maxBufferDescs=32k

maxFilesToCache  
The maxFilesToCache (MFTC) parameter controls how many files each node can cache. Each file cached requires memory for the inode and a token (lock). In addition to this parameter, the maxStatCache (MSC) parameter controls how many files are partially cached. In GPFS 3.5 and earlier the default value of maxStatCache is 4 * maxFilesToCache; in GPFS 4.1 it is the opposite, with maxFilesToCache defaulting to 4000 and maxStatCache to 1000.

The Token Managers (TM) for a cluster have to keep token state for all nodes in the cluster and for nodes in remote clusters that mount the file systems. A Token Manager uses roughly 400 bytes of memory to manage one token for one node. The amount of memory available for caching tokens on each Token Manager node is controlled by the tokenMemLimit parameter, which defaults to one memory segment (this varies per operating system). In a large cluster, a change in the value of maxFilesToCache is greatly magnified: increasing maxFilesToCache from the default of 4000 by a factor of 2 in a cluster with 200 nodes increases the number of tokens a token manager needs to store by approximately 800,000. Therefore on large clusters it is recommended to only increase maxFilesToCache where needed. This is usually on a subset of nodes that are used as login nodes, where multiple users are concurrently doing directory listings, for example. On these nodes you should increase the maxFilesToCache parameter to 60k to 100k. Nodes that may benefit from increasing maxFilesToCache include login nodes, NFS/CIFS exporters, email servers, and other file servers.

For systems where applications use a large number of files, of any size, increasing the value for maxFilesToCache may prove beneficial. This is particularly true for systems where a large number of small files are accessed. The increased value should be large enough to handle the number of concurrently open files plus allow caching of recently used files. You can use mmpmon (see monitoring) to measure the number of files opened and closed on a GPFS file system. Changing the value of maxFilesToCache affects the amount of memory used on the node as well. The amount of memory required for inodes and control data structures can be calculated as: maxFilesToCache x 3.5 KB, where 3.5 KB = 3 KB + 512 bytes for an inode. If you have larger inodes, the size gets larger. Valid values of maxFilesToCache range from 1 to 100,000,000. In some rare cases there are other additional consumers of this memory space, including byte-range locks; this means that you may not always have the full segment of memory to use. If you need additional memory space you can increase the amount of memory for inode caching by increasing the value of sharedMemLimit. Note: prior to release 3.5 the default maxFilesToCache and maxStatCache were 1000 and 4000. As of release 3.5, the default values are 4000 and 1000. If you change the maxFilesToCache value but not the maxStatCache value, then maxStatCache defaults to 4 * maxFilesToCache.

Tuning guidelines: The increased value should be large enough to handle the number of concurrently open files plus allow caching of recently used files. Increasing maxFilesToCache can improve the performance of user interactive operations like running ls. Do not increase the value of maxFilesToCache on all nodes in a large cluster without ensuring you have sufficient token manager memory to support the possible number of outstanding tokens.
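
A worked example of the memory math above, assuming maxFilesToCache=60000 on a login node in a 200-node cluster:

 Memory on the node itself: 60,000 x 3.5 KB = about 210 MB
 Token state for that node on the token managers: 60,000 x 400 bytes = about 24 MB
 If all 200 nodes used 60,000, the token managers would need roughly 200 x 24 MB = 4.8 GB of token memory between them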

maxMBpS
The maxMBpS option is an indicator of the maximum throughput in megabytes that can be submitted by GPFS per second into or out of a single node. It is not a hard limit; rather, the maxMBpS value is a hint GPFS uses to calculate how many prefetch/writebehind threads should be scheduled (up to the prefetchThreads setting) for sequential file access. In GPFS 3.3 the default maxMBpS value is 150, and in GPFS 3.5 it defaults to 2048. The maximum value is 100,000. The maxMBpS value should be adjusted on each node to match the IO throughput the node is expected to support; for example, you should adjust maxMBpS for nodes that are directly attached to storage. A good rule of thumb is to set maxMBpS to twice the IO throughput required of a system. For example, if a system has two 4Gbit HBAs (400MB/sec per HBA), maxMBpS should be set to 1600. If the maxMBpS value is set too low, sequential IO performance may be reduced. This setting is not used by NSD servers; it is only used on application nodes doing sequential access to files.

maxStatCache
The maxStatCache parameter sets aside pageable memory to cache attributes of files that are not currently in the regular file cache. This can be useful to improve the performance of stat() calls for applications with a working set that does not fit in the regular file cache. The memory occupied by the stat cache can be calculated as: maxStatCache x 176 bytes. Valid values of maxStatCache range from 0 to 10,000,000. For systems where applications test the existence of files, or the properties of files, without actually opening them (as backup applications do), increasing the value for maxStatCache may prove beneficial. The default value is 1,000. On systems where maxFilesToCache is greatly increased, it is recommended that this value be manually set to something less than 4 * maxFilesToCache; for example, if you set maxFilesToCache to 30,000 you may want to set maxStatCache to 30,000 as well. On compute nodes this can usually be set much lower, since they only have a few active files in use for any one job anyway. The way Linux handles inodes makes maxStatCache generally ineffective, so on Linux systems leave maxStatCache at the default of 1000 and modify maxFilesToCache as needed. Note: prior to release 3.5 the default maxFilesToCache and maxStatCache were 1000 and 4000. The size of the GPFS shared segment can limit the maximum setting of maxStatCache.

 nfsPrefetchStrategy  
The parameter nfsPrefetchStrategy tells GPFS to optimize prefetching for NFS-style file access patterns. It defines a window of the number of blocks around the current position that are treated as "fuzzy sequential" access. This can improve performance when reading big files sequentially, where, because of kernel scheduling, some of the read requests come to GPFS out of order and therefore do not look "strictly sequential". If the filesystem blocksize is small relative to the read request sizes, making this bigger will provide a bigger window of blocks. The default is 0. Tuning guidelines: Setting nfsPrefetchStrategy to 1 can improve sequential read performance when large files are accessed using NFS and the filesystem block size is small relative to the NFS transfer block size.

nsdBufSpace  
The parameter nsdBufSpace specifies the percent of pagepool which can be utilized for NSD IO buffers. In GPFS 3.5, nsdBufSpace places an indirect maximum limit on the number of NSD threads at startup time, by limiting the available space for the buffers dedicated to NSD threads. In GPFS 3.4, nsdBufSpace was more of a dynamic limit as threads used buffers. In GPFS 3.5 nsdBufSpace is a limit imposed when the queues and threads are laid out at server startup time. 

 nsdInlineWriteMax  
The nsdInlineWriteMax parameter specifies the maximum transaction size that can be sent as embedded data in an NSD write RPC. In most cases the NSD write RPC exchange uses two steps: 1. an initial RPC from client to server requesting a write and describing it, so the server can prepare to receive it; 2. a GetData RPC back from the server to the client, requesting the data. For data smaller than nsdInlineWriteMax, GPFS sends that amount of write data directly, to avoid step 2. Note that it may be a good idea to increase this value when, for example, the configuration is using a 4k inode size or the workload consists of many small writes. The default value in GPFS 3.5 is 1KiB.
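
A hedged example, raising the limit to 4096 bytes to match a 4k inode size (an illustrative value, not an IBM recommendation):

 mmchconfig nsdInlineWriteMax=4096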

nsdMaxWorkerThreads  
The parameter nsdMaxWorkerThreads sets the maximum number of NSD threads on an NSD server that will be concurrently transferring data with NSD clients. The maximum value depends on the sum worker1Threads + prefetchThreads + nsdMaxWorkerThreads < 8192 on 64-bit architectures. The default is 64 (in 3.4) or 512 (in 3.5), with a minimum of 8 and a maximum of 8,192. This default works well in many clusters. In some cases it may help to increase nsdMaxWorkerThreads for large clusters. Scale this with the number of LUNs, not the number of clients; you need this to manage flow control on the network between the clients and the servers.

nsdMultiQueue  
The parameter nsdMultiQueue sets the maximum number of queues (small + large). The default is 256.

nsdSmallBufferSize
The parameter nsdSmallBufferSize specifies the largest IO request size that is considered "small" and thus placed in a "small" IO queue. IO requests larger than this value are sent to a large IO queue. The default value is 65536. This may need to be changed for different workloads; if, for example, the maxBlockSize is small (64k etc.), it may help to set nsdSmallBufferSize lower (perhaps 16KB). In most cases the default works well.

nsdSmallThreadRatio (New in GPFS 3.5)
The parameter nsdSmallThreadRatio determines the ratio of NSD server queues for small IOs (by default, less than 64KiB) to the number of NSD server queues that handle large IOs (> 64KiB). The default is to have more small queues than large queues. This may work well when there are a high number of small-file or metadata IO operations, though on clusters with a high percentage of large IO operations there are often not enough large queues and threads to keep the storage busy. In these cases you need to modify these parameters to provide more IO processing capability. See: NSD Server Tuning for more details.

nsdThreadMethod  
The parameter nsdThreadMethod controls the heuristic used to determine queue allocations. In earlier versions of 3.5 this was set to zero; the related heuristic was not very effective, especially when dealing with clusters that were upgraded from 3.4. An improved heuristic (corresponding to nsdThreadMethod=1) has been the default in later versions of 3.5. The default value in later versions of GPFS 3.5 is 1; prior to that it was 0. This parameter should be set to 1 in GPFS 3.5.

nsdThreadsPerQueue (New in GPFS 3.5)
The parameter nsdThreadsPerQueue determines the number of threads assigned to process each NSD server IO queue. This value is applied to both small IO and large IO queues (see nsdSmallThreadRatio for a discussion of IO queues). See: NSD Server Tuning for more details.

numaMemoryInterleave
On Linux, setting numaMemoryInterleave to yes starts mmfsd with numactl --interleave=all. Enabling this parameter may improve the performance of GPFS running on NUMA-based systems, for example if the system is based on an Intel Nehalem processor. For this parameter to work you need to have the Linux numactl utility installed.

prefetchPct
prefetchPct defaults to 20% of pagepool. GPFS uses this as a guideline that limits how much pagepool space will be used for prefetch or writebehind buffers in the case of active sequential streams. The default works well for many applications. On the other hand, if the workload is mostly sequential (video serving/ingest) with very little caching of small files or random IO, then this number can be increased up to its 60% maximum, so that each stream can have more buffers cached for prefetch and write-behind operations.

prefetchThreads
To see how many prefetchThreads are in use, use the mmfsadm command: mmfsadm dump fs | egrep "nPrefetchThreads:|total wait". Tuning guidelines: You usually don't need prefetchThreads to be more than twice the number of LUNs available to the node (see ignorePrefetchLUNCount); any more than that typically do nothing but wait in queues. The maximum value depends on the sum worker1Threads + prefetchThreads + nsdMaxWorkerThreads < 8192 on 64-bit architectures.

privateSubnetOverride
The privateSubnetOverride parameter tells GPFS to allow the use of multiple networks or communication between multiple clusters. When using multiple networks in a GPFS cluster, the primary cluster IP address (the address displayed when running the mmlscluster command) should not be a private IP address. A private TCP/IP address is defined in RFC 1597 as: 10.0.0.0 - 10.255.255.255, 172.16.0.0 - 172.31.255.255, 192.168.0.0 - 192.168.255.255. By default you cannot use multiple TCP/IP interfaces in a cluster, or mount a file system across clusters, if the daemon node name is a private IP address. If you need to use private IP addresses with multiple interfaces, or when using multi-cluster, you can tell GPFS to allow a mount to another private subnet by setting the privateSubnetOverride parameter. Setting privateSubnetOverride to 1 instructs GPFS to allow the use of multiple private subnets. The default for privateSubnetOverride is 0.

verbsLibName
To initialize IB RDMA, GPFS looks for a file called libverbs.so. If that file name is different on your system (libverbs.so.1.0, for example), you can change this parameter to match. Example: mmchconfig verbsLibName=libverbs.so.1.0

verbsRdmaQpRtrSl
Use verbsRdmaQpRtrSl to set the InfiniBand quality-of-service level for GPFS communication. This value needs to match the quality-of-service level defined for GPFS in your InfiniBand subnet manager. For example, if you define a service level of 2 for GPFS in the InfiniBand subnet manager, set verbsRdmaQpRtrSl to 2: mmchconfig verbsRdmaQpRtrSl=2

verbsrdmasperconnection
This is the maximum number of RDMAs that can be outstanding on any single RDMA connection. The default value is 8. Tuning guidelines: In testing, the default was more than enough on SDR. All performance testing of these parameters was done on OFED 1.1 IB SDR.

verbsrdmaspernode
This is the maximum number of RDMAs that can be outstanding from the node. The default value is 0 (0 means the default, which is 32). Tuning guidelines: In testing, the default was more than enough to keep adapters busy on SDR. All performance testing of these parameters was done on OFED 1.1 IB SDR.

worker1Threads  
The worker1Threads parameter represents the total number of concurrent application requests that can be processed at one time. This may include metadata operations like file stat() requests, open or close, as well as data operations. The worker1Threads parameter can be reduced without having to restart the GPFS daemon; increasing the value of worker1Threads requires a restart of the GPFS daemon. To determine whether you have a sufficient number of worker1Threads configured you can use the mmfsadm dump mb command:

# mmfsadm dump mb | grep Worker1
  Worker1Threads: max 48 current limit 48 in use 0 waiting 0
  PageDecl: max 131072 in use 0

Using the mmfsadm command you can see how many threads are "in use" and how many application requests are "waiting" for a worker1 thread. Tuning guidelines: The default is good for most workloads. You may want to increase worker1Threads if your application uses many threads and does Asynchronous IO (AIO) or Direct IO (DIO); in these cases the worker1Threads are doing the IO operations. A good place to start is to have worker1Threads set to approximately 2 times the number of LUNs in the file system so GPFS can keep the disks busy with parallel requests. The maximum value depends on the sum worker1Threads + prefetchThreads + nsdMaxWorkerThreads < 8192 on 64-bit architectures. Do not use excessive values of worker1Threads, since that may cause contention on common mutexes and locks.

worker3Threads  
The worker3Threads parameter specifies the number of threads to use for inode prefetch. A value of zero disables inode prefetch. The default is 8. Tuning guidelines: The default is good for most workloads.

writebehindThreshold
The writebehindThreshold parameter determines at what point GPFS starts flushing newly written data out of the pagepool for a file being sequentially written. Until the file size reaches this threshold, no writebehind is started as full blocks are filled. Increasing this value will defer writebehind for new larger files. This can be useful, for example, if your workload contains temp files that are smaller than writebehindThreshold and are deleted before they are flushed from cache. The default is 512k (524288 bytes). If the value is too large, there may be too many dirty buffers that the sync thread has to flush at the next sync interval, causing a surge in disk IO; keeping it small will ensure a smooth flow of dirty data to disk. Tuning guidelines: The default is good for most workloads. Increase this value if you have a workload where not flushing newly written files larger than 512k would be beneficial.

ignorePrefetchLUNCount

NOTE: This does not apply to an NSD server doing IO on behalf of other nodes. It also does not affect random access to files or to files smaller than a full block.  On a client node GPFS calculates how many sequential access prefetch/writebehind threads to run concurrently for each filesystem by using the count of the number of LUNs in the filesystem and the maxMBpS setting.  However, if the LUNs being used are really composed of many physical disks this calculation can underestimate how much IO can be done concurrently.  For example, GNR (GSS), XIV, or SVC disk subsystems logically may stripe a LUN to hundreds of disks. (As of 3.4.0.21 or 3.5.0.10) Setting ignorePrefetchLUNCount=yes will ignore the LUN count and only use the maxMBpS setting to dynamically determine how many threads to schedule up to the maxPrefetchThreads setting. Prefetching may become much more aggressive because it depends on the maxMBpS setting and the actual IO times of the last 16 full block IOs for each filesystem. Under heavy loads, the IO times will increase due to queuing on the disks or NSD servers, resulting in GPFS doing more prefetching to try to attain maxMBpS. So set maxMBpS to a reasonable expectation of how much IO bandwidth a client node can get either to directly attached disks or over the network to NSD servers. MaxPrefetchThreads should then be set as a cap on the number of concurrent prefetch/writebehind threads when maxMBpS calculation tries too hard. Tuning Guidelines The default (no) is good for traditional LUNs where one LUN maps to a single disk or n+mP array. Use "yes" when the LUNs presented to GPFS are made up of a large numbers of physical disks.
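
A hedged example for client nodes whose LUNs are virtualized by the storage controller ("clientNodes" is a placeholder node class, and the maxMBpS figure is just an assumed bandwidth expectation for those nodes):

 mmchconfig ignorePrefetchLUNCount=yes -N clientNodes
 mmchconfig maxMBpS=4000 -N clientNodes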

Monday, September 11, 2017

Introducing PowerVC 1.3.3



To integrate IBM Power Systems with a cloud environment and manage them there, you need the virtualization layer plus PowerVC to control it. PowerVC has been updated on a roughly six-month cycle, from the OpenStack Diablo release up to the most recently announced Ocata release. The latest version, PowerVC 1.3.3, adds many new functions for building a software-defined cloud environment, and some of them are delivered as technical previews so that you can evaluate them before applying them to production systems.

Software-defined networking

Software-defined networking (SDN) virtualizes the network in much the same way that compute resources (CPU, memory) are virtualized. With SDN you can deploy networks and change the network layout without physically changing the network environment. PowerVC supports SDN on PowerVM NovaLink-managed systems.

The SDN capabilities integrated into PowerVC 1.3.3 include the following. To use these features, a network node built on a separate Red Hat Linux server is required.

- Support for overlay networks (e.g. VXLAN)
- Support for virtual routers that connect overlay networks to a WAN (wide area network)
- Support for external IP addresses, so that public IP addresses on the WAN can be assigned to virtual machines on an overlay network

Security and some other functions are not yet supported for production environments and are available for technical validation only.

Host memory rebalancing for CoD mobile memory

DRO (Dynamic Resource Optimizer), the dynamic resource relocation function that optimizes resource placement on NUMA-based systems, is now supported by PowerVC as well. In particular, host memory rebalancing for Capacity on Demand (CoD) mobile memory is supported on NovaLink-managed systems, which resolves the performance degradation caused by physical memory placement in a PowerVC environment when virtual machines are migrated or mobile memory is activated.

Email notifications

On Cloud PowerVC Manager systems, PowerVC can be configured to send email when specific events occur, such as when a VM deployment request is created or completed. Emails are sent through an SMTP server, using addresses looked up in the specified LDAP server.

Administrators can customize the email message sent with each event and choose which messages are sent by default; users can override those defaults when messages are sent.

More tasks available in the user interface

In PowerVC 1.3.3, the following tasks can be performed from the GUI:

- Set project policies: on Cloud PowerVC Manager systems, project policies are used to control various aspects of self-service users' resources and tasks.
- View usage: on Cloud PowerVC Manager systems, usage information is available in the user interface. This data includes the number of virtual machines owned, the amount of memory allocated, the number of virtual processors allocated, and so on. Administrators and project managers get a new resource-usage tab on the home page.
- Project tasks: adding, removing, and changing projects.
- Role tasks: roles assigned to users can now be changed from the user interface.
- Delete multiple volumes: multiple volumes can be selected and deleted at once.

Project quotas

Administrators can set quotas on resource usage within a project from the user interface. For example, you can specify how many virtual machines may exist in a project, how many physical and virtual processors the project may use, and how much volume storage the project may consume.

New and changed roles

The PowerVC roles have been updated to better match requirements:

- The deprecated "deployer" role is no longer supported. When PowerVC is upgraded to version 1.3.3, every user with the deployer role is given the roles most similar to the old deployer role: vm_manager, storage_manager, and image_manager.
- The deployer_restricted role is renamed to "deployer". When upgrading PowerVC to version 1.3.3, users with the deployer_restricted role are given the deployer role.
- A new project_manager role (Cloud PowerVC Manager only). This new role lets a user manage most aspects of a project from a simplified view that omits infrastructure details. For example, a project manager can approve or reject self-service users' requests, but cannot remove a compute host from management.

Managing an existing virtual machine in a different project

Previously, you could not manage a virtual machine in one project, unmanage it, and then manage it in another project. In version 1.3.3 this process is supported when shared networks are used.

Registering multiple Cisco VSANs

You can register multiple Cisco virtual storage area networks (VSANs) on a Fibre Channel switch.

Registering multiple Brocade virtual fabrics

You can register multiple Brocade virtual fabrics from the fabric registration page.

Reference architecture for PowerVC high availability and disaster recovery

The PowerVC documentation provides a reference architecture for a high availability and disaster recovery solution, implemented by installing two or more PowerVC systems so that one system remains active and periodic backups of its data can be taken.

Renaming virtual machines

Finally, it's here. You spoke, and we listened: you can now rename virtual machines in PowerVC. Users with the admin, deployer, or vm_manager role assignment can rename a virtual machine. This does not actually rename the virtual machine on the hypervisor; rather, it is a major usability improvement because it provides a simple mechanism to update the name within PowerVC.

Micro-partitioning support


When deploying or resizing a virtual machine, a partition that uses shared processors can be configured to use as little as 1/20 of a processing unit (assuming the underlying compute host supports this granularity).

Next time, I plan to look at software-defined networking (SDN) in more detail.