
NetApp – Clustered DATA ONTAP – FAS Series – Part 3


NetApp FAS unified storage arrays are available in a wide range of configurations to meet current business needs. The Clustered Data ONTAP operating system is available for all FAS unified storage arrays. Each NetApp FAS platform can be configured with SATA, SAS, or SSD disk shelves, and shelves can be mixed, which allows you to scale performance and capacity independently. Using NetApp FlexArray technology, you can also integrate third-party storage arrays into Clustered Data ONTAP as back-end storage. Let’s have a look at some of the NetApp FAS series.

NetApp focuses on two platform architectures, one for the enterprise market segment and one for the entry level.

  • The enterprise segment focuses on system performance and scalability (cost is a lower priority), improved serviceability, in-place controller upgrades (within a family and between generations), and a future that includes hot-swap I/O.
  • The entry-level segment places greater priority on balancing system performance against cost, delivering integrated controllers and storage, providing more size-optimized platforms, and improving simplicity.

 

NetApp’s enterprise-level FAS series starts with the FAS8xxx family, and the entry-level series starts with the FAS2xxx family.

 

Enterprise Segment Platform:

NetApp FAS 8xxx

 

NetApp Entry Level FAS:

NetApp FAS 25xx

 

Let’s look at the FAS controller closely.

NetApp FAS storage consists of the following:

  • NetApp FAS controller
  • Disk shelf
  • IOXM – I/O Expansion Module (optional component)
  • Clustered DATA ONTAP operating system (FreeBSD-based)

 

NetApp FAS Controller:

The snapshot below shows the available ports on a FAS 8020 controller.

FAS 8020 NetApp Controller

 

The above diagram shows a single FAS controller. A typical FAS 8020 HA configuration looks like the one below. In a single-chassis HA pair, both controllers reside in the same chassis, and the HA interconnect is provided by the internal backplane, so no external HA interconnect cabling is required.

Single Chassis HA-Pair

Note: The HA interconnect and the cluster interconnect are not the same.

In Clustered DATA ONTAP, many nodes are aggregated into a single cluster. We will look at clustering in the upcoming article.
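To illustrate the difference called out in the note above, both interconnects can be inspected separately from the cluster shell once a cluster is up. This is a minimal sketch assuming a cluster similar to the NetUA lab cluster built later in this series; output varies by system.

NetUA::> storage failover show
NetUA::> network interface show -role cluster

The first command reports the takeover state between the two controllers of an HA pair (traffic that rides the HA interconnect), while the second lists the cluster LIFs (for example clus1/clus2 on ports e0a and e0b) that carry traffic over the cluster interconnect between nodes.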



NetApp – Clustered DATA ONTAP – Read Operations – Part 4


There is a huge difference between Data ONTAP 7-Mode and C-Mode in data access methods. In 7-Mode, data is accessed through an HA pair of controllers; an incoming request goes to one of the two HA controllers. In C-Mode, multiple controllers are aggregated into a cluster, and a request might land on any controller, irrespective of where the storage resides. The requested data travels over the cluster interconnect if the LIF is not hosted on the node that owns the storage. In this article, we will look at the Clustered DATA ONTAP architecture and data access methods.

 

1. What does a typical Clustered DATA ONTAP six-node cluster look like?

In the following diagram, we can see that three HA pairs form a six-node cluster.

NetApp 6 Node cluster

The NetApp storage system architecture includes multiple components: storage controllers, high-availability
interconnect, multipath high-availability storage connections, disk shelves, system memory, NVRAM, Flash Cache modules, solid-state drive (SSD) aggregates, hard-disk-drive (HDD) aggregates, and flash pools. Storage systems that  run the clustered Data ONTAP operating system also include cluster-interconnect and multiple cluster nodes.

Note: HA pair = two controllers interconnected through the backplane in a single chassis.

 

2. What protocols are supported on NetApp controllers?

The Data ONTAP architecture consists of multiple layers, which are built on top of the FreeBSD Unix operating system. Above the FreeBSD Unix kernel is the data layer that includes the WAFL (Write Anywhere File Layout) file system, RAID, storage, failover, and the protocols for Data ONTAP operating in 7-Mode. Also above the FreeBSD kernel is the NVRAM driver and manager. Above these layers is the NAS and SAN networking layer, which includes protocol support for clustered Data ONTAP. Above the networking layer is the Data ONTAP management layer.

NetApp Data ONTAP – Controller Architecture

 

3. What are the two different types of data access in Clustered Data ONTAP?

  • Direct access (7-Mode and C-Mode)
  • Indirect access (C-Mode only)

Both clustered Data ONTAP and Data ONTAP operating in 7-Mode support direct data access; however, only
clustered Data ONTAP supports indirect data access.

 

Clustered Data ONTAP Data Access

Scenario 1:

Direct access: the client accesses the data through a LIF hosted on the node that owns the disk shelf holding the data.

Scenario 2:

Indirect access: the client accesses the data through a LIF hosted on a node that does not own the disk shelf holding the data. In this case, the data passes through the cluster interconnect.

 

Indirect data access enables you to scale workloads across multiple nodes. The latency between direct and indirect data access is negligible, provided that CPU headroom exists. Throughput can be affected by indirect data access, because additional processing might be required to move data over the cluster-interconnect.

 

Data Access type is protocol dependent. SAN data access can be direct or indirect depending on path selected by
Asymmetric Logical Unit Access (ALUA). NFS data access can be direct or indirect, except that pNFS is always direct.
CIFS data access can be either direct or indirect.
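A rough way to see which path a given client would take is to compare the node that currently hosts the data LIF with the node that owns the aggregate holding the volume. This is only a sketch using commands shown elsewhere in this series and assumes a data LIF and volume already exist on the cluster:

NetUA::> network interface show
NetUA::> volume show
NetUA::> storage aggregate show

If the "Current Node" of the data LIF matches the node that owns the aggregate containing the volume, the access is direct; otherwise the request is served indirectly over the cluster interconnect.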

 

Let’s have a closer look at direct data access.

Direct Data Access – Read Operations

1. The read request is sent from the host to the storage system via a network interface card (NIC) (for iSCSI/NFS/CIFS) or a host bus adapter (HBA) (for FC SAN).
2. If the requested data is in system memory, it is sent to the host; otherwise, the system keeps looking for the data in storage.
3. Flash Cache (if present) is checked; if the blocks are there, they are brought into memory and then sent to the host; otherwise, the system keeps looking for the data within storage.
4. Finally, the blocks are read from disk, brought into memory, and then sent to the host.

 

Let’s have a closer look at indirect data access.

Indirect Data Access – Read Operations

Read operations for indirect data access take the following path through the storage system:

1. The read request is sent from the host to the storage system via a NIC (NFS/CIFS/iSCSI) or an HBA (FC SAN).
2. The read request is redirected to the storage controller that owns the volume.
3. If the requested data is in system memory, it is sent to the host; otherwise, the controller keeps looking for the data.
4. Flash Cache (if present) is checked; if the blocks are there, they are brought into memory and then sent to the host; otherwise, the controller keeps looking for the data.
5. Finally, the blocks are read from disk, brought into memory, and then sent to the host.

 

Hope this article is informative to you. In the next article, we will look at write operations in NetApp Clustered DATA ONTAP.


NetApp – Clustered DATA ONTAP – Write Operations – Part 5


This article explains NetApp write operations. In Clustered Data ONTAP, a write request might land on any cluster node irrespective of which node owns the storage, so write operations are handled with either direct or indirect access. Writes are not sent to the disks immediately; they are flushed when a consistency point (CP) occurs. NVRAM is the NetApp component that keeps the redo log, providing a safety net for the time between the acknowledgement of a client write request and the commitment of the data to disk.

 

WRITE Operation (Direct Data Access):

NetApp – Write Operations – Direct Access

 

Write operations for direct access take the following path through the storage system:

  1. The write request is sent to the storage system from the host via a NIC or an HBA.
  2. The write is simultaneously processed into system memory and logged in NVRAM and in the NVRAM mirror on the partner node of the HA pair.
  3. The write is acknowledged to the host.
  4. The write is sent to storage in a consistency point (CP).

 

WRITE OPERATIONS (INDIRECT DATA ACCESS)

NetApp – Write Operations – Indirect Data Access

 

Write operations for indirect data access take the following path through the storage system:

  1. The write request is sent to the storage system from the host  via a NIC or an HBA.
  2. The write is processed and redirected (via the cluster-interconnect) to the storage controller that owns the volume.
  3. The write is simultaneously processed into system memory and logged in NVRAM and in the NVRAM mirror of the partner node of the HA pair.
  4. The write is acknowledged to the host.
  5. The write is sent to storage in a CP.

 

WRITE OPERATIONS (FLASH POOL SSD CACHE):

NetApp Write Operations – Flash Pool SSD cache

 

Write operations that involve the SSD cache take the following path through the storage system:

  1. The write request is sent  to the storage system from the host via a NIC or an HBA.
  2. The write is simultaneously processed into system memory and logged in NVRAM and in the NVRAM mirror of the partner node of the HA pair.
  3. The write is acknowledged to the host.
  4. The system determines whether the random write is a random overwrite.
  5. A random overwrite is sent to the SSD cache; a random write is sent to the HDD.
  6. Writes that are sent to the SSD cache are eventually evicted from the cache to the disks, as determined by the eviction process.

 

Consistency Point (CP):

A consistency point occurs in the following scenarios:

  • Write requests are held in system memory (which acts as the write cache); once an NVRAM buffer fills up, the writes are flushed to disk.
  • A ten-second timer runs out.
  • A resource is exhausted or hits a predefined threshold, and it is time to flush the writes to disk.

What happens if back-to-back CPs occur?

  • As the first NVRAM buffer reaches its capacity, it signals memory to flush the writes to disk.
  • If the second buffer reaches capacity while writes from the first buffer are still being sent to disk, the next CP cannot occur; it can happen only after the first flush of writes is complete.

 

NVRAM :

  • Writes are sent to disk from system memory (not from NVRAM) during a CP, so NVRAM is not a write buffer.
  • It is battery-backed memory that keeps the redo log in case of a system power failure or crash.
  • It is a double-buffered journal of write operations.
  • It is mirrored between the storage controllers in an HA pair.
  • Writes in system memory are logged in NVRAM, which is mirrored and persistent.
  • It is used only for writes, not for reads.
  • It stores the redo log (a short-term transaction log), which typically covers less than 20 seconds of writes.
  • It is read only after a system crash or power failure; otherwise, it is never looked at again.
  • It enables rapid acknowledgement of client-write requests.
  • It is very fast and does not cause performance issues (see the node-shell sketch below for how to inspect a node's NVRAM).
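If you want to see what NVRAM (or NVMEM) a node reports, the classic node-shell command below can be used. This is a sketch; the simulator emulates NVRAM in memory, so the detailed slot information appears only on physical FAS controllers.

NetUA::> system node run -node NetUA-01 sysconfig -a

On real hardware the output includes the NVRAM/NVMEM size along with the other slot details, and storage failover must be healthy for the NVRAM mirror on the HA partner to stay in sync.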

 

Hope this article is informative to you. Share it! Comment!! Be Sociable!!!


NetApp – Configuring the Two Node Cluster – Part 6


This article demonstrates how to configure a NetApp Clustered ONTAP two-node cluster. Since I am going to use the simulator for this demonstration, we can't virtualize HA pair controllers, so I will use two separate nodes to form the cluster in a switchless configuration (assuming direct cluster interconnect cabling is in place between the nodes). In order to form the cluster, we need to create the cluster on the first node and then join the remaining nodes. We also have to make sure that all systems are in sync with an NTP source to prevent CIFS and Kerberos failures (a brief NTP example follows the note below).

Note: Both the controllers are booted in C-Mode.
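For reference, once the cluster is up, the time source can be checked and an NTP server added from the cluster shell. The sketch below is based on the Data ONTAP 8.2-era command set (the NTP commands moved under "cluster time-service" in later releases), and the name-server IP 192.168.0.11 is reused here purely as an illustrative NTP source; verify the exact syntax on your release with "?".

NetUA::> cluster date show
NetUA::> system services ntp server create -node NetUA-01 -server 192.168.0.11
NetUA::> system services ntp server show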

 

 Creating the cluster on the first node:

 

1. Log in to the first controller using the serial console, or via the IP address if you have already configured the node management IP. In this case, I have connected to the first node using the serial port.

 

2. Login as admin with no password.

 

3. Once you have reached the node shell, execute the “cluster setup” command.

::> cluster setup

 

4. You will get a wizard like the one below. Enter “create” to create a new cluster.

Welcome to the cluster setup wizard.

You can enter the following commands at any time:
  "help" or "?" - if you want to have a question clarified,
  "back" - if you want to change previously answered questions, and
  "exit" or "quit" - if you want to quit the cluster setup wizard.
     Any changes you made before quitting will be saved.

You can return to cluster setup at any time by typing "cluster setup".
To accept a default or omit a question, do not enter a value.


Do you want to create a new cluster or join an existing cluster? {create, join}: create

 

5. When prompted about using the node as a single-node cluster, reply “no” because this will be a multi-node cluster.

Do you intend for this node to be used as a single node cluster? {yes, no} [no]:
no

 

6. Since it is a simulator, we are going to accept the defaults. Enter “yes” to accept the default values for the cluster network configuration.

System Defaults:
Private cluster network ports [e0a,e0b].
Cluster port MTU values will be set to 1500.
Cluster interface IP addresses will be automatically generated.
Do you want to use these defaults? {yes, no} [yes]: yes

 

7. Enter the cluster name.

Step 1 of 5: Create a Cluster
You can type "back", "exit", or "help" at any question.

Enter the cluster name: NetUA

 

8. Enter the cluster base license key.

Enter the cluster base license key: XXXXXXXXXXXXXXXXXXXXXXXXX

Creating cluster NetUA

Network set up .....

 

9. Just press “Enter” to continue if you don’t want to add additional license keys at this moment.

Step 2 of 5: Add Feature License Keys
You can type "back", "exit", or "help" at any question.

Enter an additional license key []:

 

10. Set the cluster administrator (admin) password.

Step 3 of 5: Set Up a Vserver for Cluster Administration
You can type "back", "exit", or "help" at any question.

Enter the cluster administrator's (username "admin") password:

Retype the password:

New password must be at least 8 characters long.

You can type "back", "exit", or "help" at any question.

Enter the cluster administrator's (username "admin") password:

Retype the password:

 

11. Enter the port and IP details for the cluster management LIF.

Enter the cluster management interface port [e0c]:
Enter the cluster management interface IP address: 192.168.0.101
Enter the cluster management interface netmask: 255.255.255.0
Enter the cluster management interface default gateway: 192.168.0.1

A cluster management interface on port e0c with IP address 192.168.0.101 has been created.  You can use this address to connect to and manage the cluster.

 

12. Enter the DNS domain name and name server IP.

Enter the DNS domain names: learn.netapp.local
Enter the name server IP addresses: 192.168.0.11
DNS lookup for the admin Vserver will use the learn.netapp.local domain.

 

13. We will skip SFO (storage failover), since the simulator does not support this feature.

Step 4 of 5: Configure Storage Failover (SFO)
You can type "back", "exit", or "help" at any question.


SFO will not be enabled on a non-HA system.

14. Set the Node location.

Step 5 of 5: Set Up the Node
You can type "back", "exit", or "help" at any question.

Where is the controller located []: BLR

 

15. Configure the node Management LIF.

Enter the node management interface port [e0f]:
Enter the node management interface IP address: 192.168.0.91
Enter the node management interface netmask: 255.255.255.0
Enter the node management interface default gateway: 192.168.0.1
A node management interface on port e0f with IP address 192.168.0.91 has been created.

 

16. After the node management LIF is configured, the wizard completes and the session automatically logs off.

Cluster setup is now complete.

To begin storing and serving data on this cluster, log in to the command-line
interface (for example, ssh admin@192.168.0.101) and complete the following
additional tasks if they have not already been completed:

- Join additional nodes to the cluster by running "cluster setup" on
  those nodes.
- For HA configurations, verify that storage failover is enabled by
  running the "storage failover show" command.
- Create a Vserver by running the "vserver setup" command.


In addition to using the CLI to perform cluster management tasks, you can manage
your cluster using OnCommand System Manager, which features a graphical user
interface that simplifies many cluster management tasks. This software is
available from the NetApp Support Site.

Exiting the cluster setup wizard.

Fri Nov 27 21:35:23 UTC 2015

 

17. Log back in as admin with the newly created password and check the cluster status.

login: admin
Password:
NetUA::>
NetUA::> cluster show
Node                  Health  Eligibility
--------------------- ------- ------------
NetUA-01              true    true

NetUA::>

We have successfully created a new cluster using the first controller.

 

Joining the second node to the cluster:

1. Log in to the second controller using the serial console, or via the IP address if you have already configured the node management IP. In this case, I have connected to the second node using the serial port.

2. Login as admin with no password.

3. Once you have reached the node shell, execute the “cluster setup” command.

::> cluster setup

 

4. You will get a wizard like the one below. Enter “join” to join the newly created cluster.

Welcome to the cluster setup wizard.

You can enter the following commands at any time:
  "help" or "?" - if you want to have a question clarified,
  "back" - if you want to change previously answered questions, and
  "exit" or "quit" - if you want to quit the cluster setup wizard.
     Any changes you made before quitting will be saved.

You can return to cluster setup at any time by typing "cluster setup".
To accept a default or omit a question, do not enter a value.


Do you want to create a new cluster or join an existing cluster? {create, join}: join

 

5. Accept the defaults and continue.

System Defaults:
Private cluster network ports [e0a,e0b].
Cluster port MTU values will be set to 1500.
Cluster interface IP addresses will be automatically generated.

Do you want to use these defaults? {yes, no} [yes]: yes

It can take several minutes to create cluster interfaces...

 

6. The system automatically scans over the cluster interconnect and offers the name of the cluster to join.

Step 1 of 3: Join an Existing Cluster
You can type "back", "exit", or "help" at any question.

Enter the name of the cluster you would like to join [NetUA]:
NetUA

Joining cluster NetUA

Starting cluster support services ....

This node has joined the cluster NetUA.

 

7. The system automatically skips SFO.

Step 2 of 3: Configure Storage Failover (SFO)
You can type "back", "exit", or "help" at any question.


SFO will not be enabled on a non-HA system.

 

8. Configure the node management LIF.

Step 3 of 3: Set Up the Node
You can type "back", "exit", or "help" at any question.

Enter the node management interface port [e0f]:
Enter the node management interface IP address: 192.168.0.92
Enter the node management interface netmask [255.255.255.0]:
Enter the node management interface default gateway [192.168.0.1]:

A node management interface on port e0f with IP address 192.168.0.92 has been created.

 

9. Once you have completed the node management LIF configuration, the session automatically logs off.

Cluster setup is now complete.

To begin storing and serving data on this cluster, log in to the command-line
interface (for example, ssh admin@192.168.0.101) and complete the following
additional tasks if they have not already been completed:

- Join additional nodes to the cluster by running "cluster setup" on
  those nodes.
- For HA configurations, verify that storage failover is enabled by
  running the "storage failover show" command.
- Create a Vserver by running the "vserver setup" command.


In addition to using the CLI to perform cluster management tasks, you can manage
your cluster using OnCommand System Manager, which features a graphical user
interface that simplifies many cluster management tasks. This software is
available from the NetApp Support Site.

Exiting the cluster setup wizard.

Fri Nov 27 21:43:52 UTC 2015
login:

 

10. Log in to node 2 as user “admin” and check the cluster status.

login: admin
Password:
NetUA::> cluster show
Node                  Health  Eligibility
--------------------- ------- ------------
NetUA-01              true    true
NetUA-02              true    true
2 entries were displayed.

NetUA::>

 

11. Check the network configuration on Node2.

NetUA::> network interface show
            Logical    Status     Network            Current       Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
----------- ---------- ---------- ------------------ ------------- ------- ----
NetUA
            cluster_mgmt up/up    192.168.0.101/24   NetUA-01      e0c     true
NetUA-01
            clus1        up/up    169.254.81.224/16  NetUA-01      e0a     true
            clus2        up/up    169.254.220.127/16 NetUA-01      e0b     true
            mgmt1        up/up    192.168.0.91/24    NetUA-01      e0f     true
NetUA-02
            clus1        up/up    169.254.124.94/16  NetUA-02      e0a     true
            clus2        up/up    169.254.244.74/16  NetUA-02      e0b     true
            mgmt1        up/up    192.168.0.92/24    NetUA-02      e0f     true
7 entries were displayed.

NetUA::>

 

12. Check the network configuration on Node1.

NetUA::> network interface show
            Logical    Status     Network            Current       Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
----------- ---------- ---------- ------------------ ------------- ------- ----
NetUA
            cluster_mgmt up/up    192.168.0.101/24   NetUA-01      e0c     true
NetUA-01
            clus1        up/up    169.254.81.224/16  NetUA-01      e0a     true
            clus2        up/up    169.254.220.127/16 NetUA-01      e0b     true
            mgmt1        up/up    192.168.0.91/24    NetUA-01      e0f     true
NetUA-02
            clus1        up/up    169.254.124.94/16  NetUA-02      e0a     true
            clus2        up/up    169.254.244.74/16  NetUA-02      e0b     true
            mgmt1        up/up    192.168.0.92/24    NetUA-02      e0f     true
7 entries were displayed.

NetUA::>

 

We have successfully set up the two-node NetApp cluster.
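As an optional sanity check after both nodes have joined, the cluster interconnect can also be exercised from the cluster shell. This is a sketch; "cluster ping-cluster" is an advanced-privilege command, so confirm its availability on your release.

NetUA::> cluster show
NetUA::> set -privilege advanced
NetUA::*> cluster ping-cluster -node NetUA-01
NetUA::*> set -privilege admin

"cluster show" should report both nodes as healthy and eligible, and the ping-cluster run verifies that the cluster LIFs of each node can reach one another.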

 

Hope this article is informative to you. Share it! Comment it!! Be Sociable!!!


NetApp – Clustered DATA ONTAP – License Management – Part 7


In NetApp Data ONTAP, you must buy license keys to enable additional features. From NetApp Data ONTAP 8.2 onwards, all license keys are 28 characters in length. In Clustered Data ONTAP, you must keep the license entitlement the same across all the nodes; this ensures that failover happens without any issues. In this article we are going to see how to manage license codes on NetApp Clustered Data ONTAP 8.2, including adding, removing, and displaying license keys using the cluster shell.

The NetApp OnCommand System Manager graphical utility also lets you manage license keys from a graphical window: Cluster -> Cluster_name -> Configuration -> System Tools -> License.

Note: NetApp licenses are based on the system serial number.

1. Log in to the cluster management LIF as the admin user (SSH session).

Cluster Management LIF – NetApp

 

Once you have logged in, you will get the cluster shell as shown below.

login as: admin
Using keyboard-interactive authentication.
Password:
NetUA::>

2. Check the cluster serial number.

NetUA::> cluster identity show

          Cluster UUID: 69a95be8-XXXX-11e5-8987-XXXXXXXXXXXXXX
          Cluster Name: NetUA
 Cluster Serial Number: 1-80-XXXXXX
      Cluster Location: BLR
       Cluster Contact:

NetUA::>

 

3. To check the NetApp controllers’ serial numbers, use the following command.

NetUA::> system node show -fields node,serialnumber
node     serialnumber
-------- ------------
NetUA-01 40XX432-XX-X
NetUA-02 40XX389-XX-X
2 entries were displayed.

NetUA::>

 

4. Navigate to the license hierarchy and check the available options.

NetUA::> license

NetUA::system license> ?
  add                         Add one or more licenses
  clean-up                    Remove unnecessary licenses
  delete                      Delete a license
  show                        Display licenses
  status>                     Display license status

NetUA::system license>

Note: To see the available options, just type “?” in the cluster shell at any time.

 

Checking the License status:

1. Check the currently installed licenses. At this time, we have only the base license installed on the system.

NetUA::system license> show

Serial Number: 1-80-XXXXXXX
Owner: NetUA
Package           Type    Description           Expiration
----------------- ------- --------------------- --------------------
Base              license Cluster Base License  -

NetUA::system license>

 

2. To see the features that can be enabled and a complete license summary, use the following command.

NetUA::system license> status show
Package            Licensed Method  Expiration
-----------------  ---------------  --------------------
Base               license          -
NFS                none             -
CIFS               none             -
iSCSI              none             -
FCP                none             -
CDMI               none             -
SnapRestore        none             -
SnapMirror         none             -
FlexClone          none             -
SnapVault          none             -
SnapLock           none             -
SnapManagerSuite   none             -
SnapProtectApps    none             -
V_StorageAttach    none             -
SnapLock_Enterprise
                   none             -
Insight_Balance    none             -
16 entries were displayed.

NetUA::system license>

 

Adding the new License key:

 

1. Let’s add the license for iSCSI and check the status.

NetUA::system license> add -license-code XXXXKTJWXXXXBGXAGAAAAAAXXXX
License for package "iSCSI" and serial number "1-81-0000000000000004079432749" installed successfully.
(1 of 1 added successfully)

NetUA::system license> status show
Package            Licensed Method  Expiration
-----------------  ---------------  --------------------
Base               license          -
NFS                none             -
CIFS               none             -
iSCSI              license          -
FCP                none             -
CDMI               none             -
SnapRestore        none             -
SnapMirror         none             -
FlexClone          none             -
SnapVault          none             -
SnapLock           none             -
SnapManagerSuite   none             -
SnapProtectApps    none             -
V_StorageAttach    none             -
SnapLock_Enterprise
                   none             -
Insight_Balance    none             -
16 entries were displayed.

NetUA::system license>

The previous command summary states that the iSCSI feature is enabled. We have two nodes in the cluster. Is it enabled on both nodes? Let’s check.

NetUA::system license> show

Serial Number: 1-80-000008
Owner: NetUA
Package           Type    Description           Expiration
----------------- ------- --------------------- --------------------
Base              license Cluster Base License  -

Serial Number: 1-81-0000000000000004079432749
Owner: NetUA-01
Package           Type    Description           Expiration
----------------- ------- --------------------- --------------------
iSCSI             license iSCSI License         -
2 entries were displayed.

NetUA::system license>

We can see that the iSCSI feature is enabled only for node NetUA-01. We must enable the iSCSI feature for node 2 as well. Let’s add the license key for node 2.

NetUA::system license> add -license-code XXXXLUNFXMXXXXEZFAAAAAAXXXX
License for package "iSCSI" and serial number "1-81-0000000000000004034389062" installed successfully.
(1 of 1 added successfully)


NetUA::system license> show

Serial Number: 1-80-000008
Owner: NetUA
Package           Type    Description           Expiration
----------------- ------- --------------------- --------------------
Base              license Cluster Base License  -

Serial Number: 1-81-0000000000000004034389062
Owner: NetUA-02
Package           Type    Description           Expiration
----------------- ------- --------------------- --------------------
iSCSI             license iSCSI License         -

Serial Number: 1-81-0000000000000004079432749
Owner: NetUA-01
Package           Type    Description           Expiration
----------------- ------- --------------------- --------------------
iSCSI             license iSCSI License         -
3 entries were displayed.

NetUA::system license>

We can see that the iSCSI feature has been enabled on both cluster nodes.

 

2. To add multiple license keys at once, use the following command.

NetUA::system license> add -license-code XXXXXXXWOZNBBGXAGAAAAAAAXXXXX,XXXXXXNFXMSMUCEZFAAAAAAAXXXX
License for package "SnapMirror" and serial number "1-81-0000000000000004079432749" installed successfully.
License for package "SnapMirror" and serial number "1-81-0000000000000004034389062" installed successfully.
(2 of 2 added successfully)


NetUA::system license> show

Serial Number: 1-80-000008
Owner: NetUA
Package           Type    Description           Expiration
----------------- ------- --------------------- --------------------
Base              license Cluster Base License  -

Serial Number: 1-81-0000000000000004034389062
Owner: NetUA-02
Package           Type    Description           Expiration
----------------- ------- --------------------- --------------------
SnapMirror        license SnapMirror License    -

Serial Number: 1-81-0000000000000004079432749
Owner: NetUA-01
Package           Type    Description           Expiration
----------------- ------- --------------------- --------------------
SnapMirror        license SnapMirror License    -
3 entries were displayed.

NetUA::system license>

 

To Remove the NetApp feature License:

 

1. To remove a license key, you must specify the serial number of the node from which you want to remove the feature. Here we are removing the iSCSI license from both nodes.

NetUA::system license> delete -serial-number 1-81-0000000000000004079432749 -package iSCSI

Warning: The following license will be removed:
         iSCSI               1-81-0000000000000004079432749
Do you want to continue? {y|n}: y

NetUA::system license> show

Serial Number: 1-80-000008
Owner: NetUA
Package           Type    Description           Expiration
----------------- ------- --------------------- --------------------
Base              license Cluster Base License  -

Serial Number: 1-81-0000000000000004034389062
Owner: NetUA-02
Package           Type    Description           Expiration
----------------- ------- --------------------- --------------------
iSCSI             license iSCSI License         -
2 entries were displayed.

NetUA::system license> delete -serial-number 1-81-0000000000000004034389062 -package iSCSI

Warning: The following license will be removed:
         iSCSI               1-81-0000000000000004034389062
Do you want to continue? {y|n}: y

NetUA::system license> show

Serial Number: 1-80-000008
Owner: NetUA
Package           Type    Description           Expiration
----------------- ------- --------------------- --------------------
Base              license Cluster Base License  -

NetUA::system license>

 

How to clean up unused and expired licenses on NetApp?

You can delete unused and expired licenses individually using the delete command, but in a large environment there is an option to clean up all unused and expired licenses in one shot.

1. Just type clean-up and use the “?” to see the available options.

NetUA::system license> clean-up ?
  [[-unused] [true]]          Remove unused licenses
  [ -expired [true] ]         Remove expired licenses
  [ -simulate|-n [true] ]     Simulate Only

NetUA::system license> clean-up

 

2. Let’s simulate a clean-up of unused license keys on our cluster.

NetUA::system license> clean-up -simulate true -unused
The following licenses can be cleaned up:

Serial number: 1-81-2580252174352410562389062
Owner: none
Package                   Reason
------------------------- -----------------------------------------------------
iSCSI                     Serial number is not used by any node in the cluster

NetUA::system license>

The license listed above can be cleaned up since it is not used by any node.

 

3. To check for expired licenses, use the following command.

NetUA::system license> clean-up -simulate true -expired
No license to clean-up.
NetUA::system license>

 

4. Let’s clean up the unused license (based on step 2).

NetUA::system license> clean-up -unused true
1 unused license deleted.
NetUA::system license>
NetUA::system license> clean-up -simulate true -unused
No license to clean-up.
NetUA::system license>

In the same way, you can remove expired licenses if you have any.

 

Hope this article is informative to you. Share it! Comment it!! Be Sociable!!!


NetApp – Clustered DATA ONTAP – Shells & Directories – Part 8


One of the most famous proverbs about Unix systems is “Where there is a shell, there’s a way.” If you want to interact directly with the system, you need a shell. NetApp Clustered Data ONTAP uses the FreeBSD operating system on the controllers. You can manage resources from the cluster shell (CLI) or the OnCommand GUI. The CLI and the GUI provide access to the same information, and you can use both to manage the same resources within a cluster, but the command line always remains more powerful than the GUI.

 The hierarchical command structure consists of command directories and commands. A command directory might contain commands, more command directories, or both. In this way, command directories resemble file system directories and file structures. Command directories provide groupings of similar commands. For example, all commands for storage-related actions fall somewhere within the storage command directory. Within that directory are directories for disk commands and aggregate commands.

 

Shells:

  • Cluster shell – the shell used to manage the entire cluster.
  • Node shell – a subset of the Data ONTAP 7G and Data ONTAP 7-Mode commands; using the node shell, you can manage a single node.
  • System shell – provides access to the BSD shell of the controller.

 

Cluster Shell:

  • The cluster shell is accessible from a cluster management logical interface (LIF).
  • The root user is not permitted.
  • The admin user is predefined, with a password that is chosen during cluster setup.
  • SSH is the default method for non-console logins.

 

Let’s access the cluster shell using the cluster management LIF (SSH to the cluster management IP).

1. Use the cluster management LIF to log in to the cluster shell.

login as: admin
Using keyboard-interactive authentication.
Password:
NetUA::>

 

2. Just enter “?” to see the available commands.

NetUA::> ?
  up                          Go up one directory
  cluster>                    Manage clusters
  dashboard>                  Display dashboards
  event>                      Manage system events
  exit                        Quit the CLI session
  history                     Show the history of commands for this CLI session
  job>                        Manage jobs and job schedules
  lun>                        Manage LUNs
  man                         Display the on-line manual pages
  network>                    Manage physical and virtual network connections
  qos>                        QoS settings
  redo                        Execute a previous command
  rows                        Show/Set the rows for this CLI session
  run                         Run interactive or non-interactive commands in the node shell
  security>                   The security directory
  set                         Display/Set CLI session settings
  sis                         Manage volume efficiency
  snapmirror>                 Manage SnapMirror
  statistics>                 Display operational statistics
  storage>                    Manage physical storage, including disks, aggregates, and failover
  system>                     The system directory
  top                         Go to the top-level directory
  volume>                     Manage virtual storage, including volumes, snapshots, and mirrors
  vserver>                    Manage Vservers

NetUA::>

 

3. Navigate to the cluster directory and see the available options.

NetUA::cluster> ?
  contact-info>               Manage contact information for the cluster.
  create                      Create a cluster
  date>                       Manage cluster's date and time setting
  ha>                         Manage high-availability configuration
  identity>                   Manage the cluster's attributes, including name and serial number
  join                        Join an existing cluster using the specified member's IP address
  modify                      Modify cluster node membership attributes
  peer>                       Manage cluster peer relationships
  setup                       Setup wizard
  show                        Display cluster node members
  statistics>                 Display cluster statistics

NetUA::cluster>

 

4. The cluster shell has three privilege levels:

  • admin
  • advanced
  • diag

To change the privilege level from “admin” to “advanced”, use the following command.

NetUA::cluster> set -privilege advanced

Warning: These advanced commands are potentially dangerous; use them only when directed to do so by NetApp personnel.
Do you want to continue? {y|n}: y

 

To change the current mode to “diag”, use the following command.

NetUA::cluster*> set -privilege diag

Warning: These diagnostic commands are for use by NetApp personnel only.
Do you want to continue? {y|n}: y

NetUA::cluster*>

The asterisk (*) in the prompt indicates that the shell is in advanced or diag mode.

 

To change shell mode to “admin” , use the following command.

NetUA::cluster*> set -privilege admin

NetUA::cluster>

 

Node Shell:

  • The node shell can be accessed within the cluster.
  • You can access the node shell in interactive mode or directly execute the commands from the cluster shell.
  • This is similar to the 7-Mode CLI.
  • The scope is limited to one node at a time.
  • It is useful for getting data about a node.
  • It gives visibility only to those objects that are attached to the given controller (disks, aggregates, volumes).

 

1. To access the node shell in interactive mode, use the following command. You can return to the cluster shell at any time by pressing Ctrl+D.

NetUA::> system node run -node NetUA-01
Type 'exit' or 'Ctrl-D' to return to the CLI
NetUA-01> hostname
NetUA-01
NetUA-01>

 

2. To execute a node shell command directly from the cluster shell:

NetUA::> system node run -node NetUA-01 hostname
NetUA-01
NetUA::>

 

3. The node shell is very useful for viewing node-related configuration.

NetUA-01> sysconfig
        NetApp Release 8.2 Cluster-Mode: Tue May 21 05:58:22 PDT 2013
        System ID: 4079432749 (NetUA-01)
        System Serial Number: 4079432-74-9 (NetUA-01)
        System Storage Configuration: Multi-Path
        System ACP Connectivity: NA
        slot 0: System Board
                Processors:         2
                Memory Size:        1599 MB
                Memory Attributes:  None
        slot 0: 10/100/1000 Ethernet Controller V
                e0a MAC Address:    00:0c:29:e5:c3:ce (auto-1000t-fd-up)
                e0b MAC Address:    00:0c:29:e5:c3:d8 (auto-1000t-fd-up)
                e0c MAC Address:    00:0c:29:e5:c3:e2 (auto-1000t-fd-up)
                e0d MAC Address:    00:0c:29:e5:c3:ec (auto-1000t-fd-up)
                e0e MAC Address:    00:0c:29:e5:c3:f6 (auto-1000t-fd-up)
                e0f MAC Address:    00:0c:29:e5:c3:00 (auto-1000t-fd-up)
NetUA-01>

 

System Shell:

  • The system shell is accessed from the cluster shell or from the node using the “diag” user.
  • User “diag” must be unlocked to access the system shell.
  • You will get the BSD Unix prompt once you have logged in as diag.

 

You can use the system shell to access the BSD environment that the Data ONTAP operating system runs in. You should access the system shell only under the supervision of NetApp technical support. You can access the system shell only as the “diag” user and only from within the cluster shell. Root access to the system shell is not available from Data ONTAP clusters.

 

Let’s see how to access the system shell.

1. Log in to the cluster management LIF as the admin user.

2. Unlock the diag user.

NetUA::> security login unlock -username diag

NetUA::> 

 

3. Set the password for the diag user.

NetUA::> security login password -username diag

Enter a new password:
Enter it again:

NetUA::>

 

4. Try to access the system shell of node1.

NetUA::> system node systemshell -node NetUA-01

Error: "systemshell" is not a recognized command

NetUA::>

The system couldn’t find the systemshell command. To access the system shell, you must be at the advanced privilege level.

 

5. Set the privilege level to advanced.

NetUA::> set advanced

Warning: These advanced commands are potentially dangerous; use them only when directed to do so by NetApp personnel.
Do you want to continue? {y|n}: y

NetUA::*>

 

6. Try to access the system shell of node1 using diag user.

NetUA::*> system node systemshell -node NetUA-01

Data ONTAP/amd64 (NetUA-01) (pts/3)

login: diag
Password:
Last login: Thu Sep 26 10:17:55 from localhost


Warning:  The system shell provides access to low-level
diagnostic tools that can cause irreparable damage to
the system if not used properly.  Use this environment
only when directed to do so by support personnel.

NetUA-01%

 

7. Let’s execute some Unix commands.

NetUA-01% df -h
Filesystem                           Size    Used   Avail Capacity  Mounted on
/dev/md0                             3.3M    3.3M     55K    98%    /
devfs                                1.0K    1.0K      0B   100%    /dev
/dev/ad0s2                           1.0G    366M    658M    36%    /cfcard
/dev/md1.uzip                        611M    420M    191M    69%    /
/dev/md2.uzip                         89M     70M     19M    79%    /platform
/dev/ad3                             242G    3.1G    220G     1%    /sim
/dev/ad1s1                           5.0M    1.3M    3.3M    29%    /var
procfs                               4.0K    4.0K      0B   100%    /proc
/dev/md3                              31M    202K     31M     1%    /tmp
localhost:0x80000000,0xac3e9b52      851M    407M    444M    48%    /mroot
clusfs                               488M    488M      0B   100%    /clus
/mroot/etc/cluster_config/vserver    851M    407M    444M    48%    /mroot/vserver_fs
NetUA-01% ifconfig -a
lo0: flags=80c9<UP,LOOPBACK,RUNNING,NOARP,MULTICAST> metric 0 mtu 8232
        options=3<RXCSUM,TXCSUM>
        inet 127.0.0.1 netmask 0xff000000  LOOPBACKLIF Vserver ID: 0
        inet6 ::1 prefixlen 128  LOOPBACKLIF Vserver ID: 0
        nd6 options=3<PERFORMNUD,ACCEPT_RTADV>
lofb: flags=60088eb<UP,BROADCAST,LOOPBACK,SMART,RUNNING,NOARP,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=28<VLAN_MTU,JUMBO_MTU>
        ether 12:34:78:56:34:12
        inet 127.0.10.1 netmask 0xfffff000 broadcast 127.0.15.255 LOOPBACKLIF Vserver ID: 0
        media: Ethernet PSEUDO Device ()
default-ipspace: flags=60088aa<BROADCAST,LOOPBACK,SMART,NOARP,SIMPLEX,MULTICAST> metric 1 mtu 1500
        options=28<VLAN_MTU,JUMBO_MTU>
        ether 12:34:00:00:00:00
        media: Ethernet PSEUDO Device ()
default-ipspace_partner: flags=60088aa<BROADCAST,LOOPBACK,SMART,NOARP,SIMPLEX,MULTICAST> metric 1 mtu 1500
        options=28<VLAN_MTU,JUMBO_MTU>
        ether 12:34:75:00:00:00
        media: Ethernet PSEUDO Device ()
localhost_c169.254.0.0/16: flags=60088eb<UP,BROADCAST,LOOPBACK,SMART,RUNNING,NOARP,SIMPLEX,MULTICAST> metric 30 mtu 9000
        options=28<VLAN_MTU,JUMBO_MTU>
        ether 12:34:01:00:00:00
        inet 169.254.220.127 netmask 0xffff0000 broadcast 169.254.255.255 CLUSTERLIF Vserver ID: 0
        inet 169.254.81.224 netmask 0xffff0000 broadcast 169.254.255.255 CLUSTERLIF Vserver ID: 0
        media: Ethernet PSEUDO Device ()
NetUA-01_n192.168.0.0/24: flags=60088eb<UP,BROADCAST,LOOPBACK,SMART,RUNNING,NOARP,SIMPLEX,MULTICAST> metric 10 mtu 9000
        options=28<VLAN_MTU,JUMBO_MTU>
        ether 12:34:03:00:00:00
        inet 192.168.0.91 netmask 0xffffff00 broadcast 192.168.0.255 NODEMGMTLIF Vserver ID: 0
        media: Ethernet PSEUDO Device ()
NetUA_c192.168.0.0/24: flags=60088eb<UP,BROADCAST,LOOPBACK,SMART,RUNNING,NOARP,SIMPLEX,MULTICAST> metric 20 mtu 9000
        options=28<VLAN_MTU,JUMBO_MTU>
        ether 12:34:02:00:00:00
        inet 192.168.0.101 netmask 0xffffff00 broadcast 192.168.0.255 CSERVERMGMTLIF Vserver ID: -1
        media: Ethernet PSEUDO Device ()
NetUA-01%

 

Hope this article is informative to you. Share it! Comment!! Be Sociable!!!


NetApp – Clustered DATA ONTAP – Storage Aggregate – Part 9


Storage systems should provide fault tolerance when disk failures occur in the disk shelves. NetApp uses RAID-DP technology to provide this fault tolerance. A RAID group includes several disks that are linked together in a storage system. Although there are different implementations of RAID, Data ONTAP supports only RAID 4 and RAID-DP. Data ONTAP classifies disks as one of four types for RAID: data, hot spare, parity, or double-parity. The RAID disk type is determined by how RAID is using a disk.

 

  • Data disk – part of a RAID group; stores data on behalf of clients.
  • Hot spare disk – does not hold usable data but is available to be added to a RAID group in an aggregate. Any functioning disk that is not assigned to an aggregate, but is assigned to a system, functions as a hot spare disk.
  • Parity disk – stores parity information used for data reconstruction within a RAID group.
  • Double-parity disk – stores double-parity information within RAID groups when NetApp double-parity RAID (RAID-DP) is enabled.

 

RAID-4 :

RAID 4 protects data from a single disk failure. It requires a minimum of three disks (two data disks and one parity disk).

Using RAID 4, if one disk block goes bad, the parity disk in that disk’s RAID group is used to recalculate the data in the  failed block, and then the block is mapped to a new location on the disk. If an entire disk fails, the parity disk prevents any data from being lost. When the failed disk is replaced, the parity disk is used to automatically recalculate its contents. This is sometimes referred to as row parity.
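As a simplified illustration (not the actual on-disk layout), assume one RAID 4 stripe with three small data blocks:

D1 = 1010, D2 = 1100, D3 = 0110
P  = D1 XOR D2 XOR D3 = 0000

If the disk holding D2 fails, its block is rebuilt from the survivors: D2 = D1 XOR D3 XOR P = 1010 XOR 0110 XOR 0000 = 1100.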

NetApp RAID 4

 

RAID-DP:

RAID-DP technology protects against data loss due to a double-disk failure within a RAID group.

Each RAID-DP group contains the following:

  • Three data disks
  • One parity disk
  • One double-parity disk

RAID-DP employs the traditional RAID 4 horizontal row parity. However, in RAID-DP, a diagonal parity stripe is
calculated and committed to the disks when the row parity is written.

NetApp RAID-DP

 

 

RAID GROUP MAXIMUMS:

Here are the RAID group maximums for NetApp storage systems. RAID groups can include anywhere from 3 to 28 disks, depending on the platform and RAID type. For best performance and reliability, NetApp recommends using the default RAID group size. A short example of checking and changing this setting follows the figure below.

NetApp RAID Maximums
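The maximum RAID group size of an aggregate can be checked and adjusted from the cluster shell. This is a minimal sketch, assuming an aggregate named NetUA01_aggr1 like the one created later in this article; the -maxraidsize parameter and field names may vary slightly between releases, so confirm with "?".

NetUA::> storage aggregate show -fields maxraidsize, raidtype
NetUA::> storage aggregate modify -aggregate NetUA01_aggr1 -maxraidsize 20

Changing the maximum only affects how future disk additions are grouped; existing RAID groups are not restriped.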

 

Aggregates: 

An aggregate is a virtual layer on top of RAID groups, and each RAID group is built from a set of physical disks. Aggregates can use RAID 4 or RAID-DP RAID groups. An aggregate can be taken over by the HA partner if its controller fails, and it can be grown by adding physical disks (in the back-end, the new disks are added to RAID groups). There are two types of aggregates in NetApp:

  • 32-bit aggregates
  • 64-bit aggregates

At any time, you can convert a 32-bit aggregate to a 64-bit aggregate without any downtime. A 64-bit aggregate supports more than 16 TB of storage.
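To check the format of an existing aggregate and, if needed, start an in-place conversion, something like the following can be used. This is a sketch based on the 64bit-upgrade command set in Data ONTAP 8.x, with aggr_name as a placeholder; confirm the exact command and field names on your release with "?".

NetUA::> storage aggregate show -fields block-type
NetUA::> storage aggregate 64bit-upgrade start -aggregate aggr_name
NetUA::> storage aggregate 64bit-upgrade status -aggregate aggr_name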

 

Let’s see the storage aggregate commands.

1. Log in to the NetApp cluster management LIF.

2. List the available storage aggregates on the system.

NetUA::> storage aggregate show
Aggregate     Size Available Used% State   #Vols  Nodes            RAID Status
--------- -------- --------- ----- ------- ------ ---------------- ------------
aggr0_01     900MB   43.54MB   95% online       1 NetUA-01         raid_dp,
                                                                   normal
aggr0_02     900MB   43.54MB   95% online       1 NetUA-02         raid_dp,
                                                                   normal
2 entries were displayed.

NetUA::>

 

3. List the provisioned volumes with their aggregate names. vol0 resides on aggregate “aggr0_01”.

NetUA::> volume show
Vserver   Volume       Aggregate    State      Type       Size  Available Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
NetUA-01  vol0         aggr0_01     online     RW      851.5MB    431.0MB   49%
NetUA-02  vol0         aggr0_02     online     RW      851.5MB    435.9MB   48%
2 entries were displayed.

NetUA::>

 

4. List the available disks on node NetUA-01. Here you can see which disks are part of an aggregate.

NetUA::> storage disk show -owner NetUA-01
                     Usable           Container
Disk                   Size Shelf Bay Type        Position   Aggregate Owner
---------------- ---------- ----- --- ----------- ---------- --------- --------
NetUA-01:v4.16       1020MB     -   - spare       present    -         NetUA-01
NetUA-01:v4.17       1020MB     -   - spare       present    -         NetUA-01
NetUA-01:v4.18       1020MB     -   - spare       present    -         NetUA-01
NetUA-01:v4.19       1020MB     -   - spare       present    -         NetUA-01
NetUA-01:v4.20       1020MB     -   - spare       present    -         NetUA-01
NetUA-01:v4.21       1020MB     -   - spare       present    -         NetUA-01
NetUA-01:v4.22       1020MB     -   - spare       present    -         NetUA-01
NetUA-01:v4.24       1020MB     -   - spare       present    -         NetUA-01
NetUA-01:v4.25       1020MB     -   - spare       present    -         NetUA-01
NetUA-01:v4.26       1020MB     -   - spare       present    -         NetUA-01
NetUA-01:v4.27       1020MB     -   - spare       present    -         NetUA-01
NetUA-01:v4.28       1020MB     -   - spare       present    -         NetUA-01
NetUA-01:v4.29       1020MB     -   - spare       present    -         NetUA-01
NetUA-01:v4.32       1020MB     -   - spare       present    -         NetUA-01
NetUA-01:v5.16       1020MB     -   - aggregate   dparity    aggr0_01  NetUA-01
NetUA-01:v5.17       1020MB     -   - aggregate   parity     aggr0_01  NetUA-01
NetUA-01:v5.18       1020MB     -   - aggregate   data       aggr0_01  NetUA-01
NetUA-01:v5.19       1020MB     -   - spare       present    -         NetUA-01
NetUA-01:v5.20       1020MB     -   - spare       present    -         NetUA-01
NetUA-01:v5.21       1020MB     -   - spare       present    -         NetUA-01
NetUA-01:v5.22       1020MB     -   - spare       present    -         NetUA-01
NetUA-01:v5.24       1020MB     -   - spare       present    -         NetUA-01
NetUA-01:v5.25       1020MB     -   - spare       present    -         NetUA-01
NetUA-01:v5.26       1020MB     -   - spare       present    -         NetUA-01
NetUA-01:v5.27       1020MB     -   - spare       present    -         NetUA-01
NetUA-01:v5.28       1020MB     -   - spare       present    -         NetUA-01
NetUA-01:v5.29       1020MB     -   - spare       present    -         NetUA-01
NetUA-01:v5.32       1020MB     -   - spare       present    -         NetUA-01
NetUA-01:v6.16      520.5MB     -   - spare       present    -         NetUA-01
NetUA-01:v6.17      520.5MB     -   - spare       present    -         NetUA-01
NetUA-01:v6.18      520.5MB     -   - spare       present    -         NetUA-01
NetUA-01:v6.19      520.5MB     -   - spare       present    -         NetUA-01
NetUA-01:v6.20      520.5MB     -   - spare       present    -         NetUA-01
NetUA-01:v6.21      520.5MB     -   - spare       present    -         NetUA-01
NetUA-01:v6.22      520.5MB     -   - spare       present    -         NetUA-01
NetUA-01:v6.24      520.5MB     -   - spare       present    -         NetUA-01
NetUA-01:v6.25      520.5MB     -   - spare       present    -         NetUA-01
NetUA-01:v6.26      520.5MB     -   - spare       present    -         NetUA-01
NetUA-01:v6.27      520.5MB     -   - spare       present    -         NetUA-01
NetUA-01:v6.28      520.5MB     -   - spare       present    -         NetUA-01
NetUA-01:v6.29      520.5MB     -   - spare       present    -         NetUA-01
NetUA-01:v6.32      520.5MB     -   - spare       present    -         NetUA-01
42 entries were displayed.

NetUA::>

 

5. Let’s look at a specific aggregate’s configuration.

NetUA::> stor aggr show -aggr aggr0_01
  (storage aggregate show)

                                         Aggregate: aggr0_01
                                    Checksum Style: block
                                   Number Of Disks: 3
                                             Nodes: NetUA-01
                                             Disks: NetUA-01:v5.16,
                                                    NetUA-01:v5.17,
                                                    NetUA-01:v5.18
                           Free Space Reallocation: off
                                         HA Policy: cfo
                Space Reserved for Snapshot Copies: -
                                    Hybrid Enabled: false
                                    Available Size: 43.49MB
                                  Checksum Enabled: true
                                   Checksum Status: active
                                  Has Mroot Volume: true
                     Has Partner Node Mroot Volume: false
                                           Home ID: 4079432749
                                         Home Name: NetUA-01
                           Total Hybrid Cache Size: 0B
                                            Hybrid: false
                                      Inconsistent: false
                                 Is Aggregate Home: true
                                     Max RAID Size: 16
       Flash Pool SSD Tier Maximum RAID Group Size: -
                                          Owner ID: 4079432749
                                        Owner Name: NetUA-01
                                   Used Percentage: 95%
                                            Plexes: /aggr0_01/plex0
                                       RAID Groups: /aggr0_01/plex0/rg0 (block)
                                       RAID Status: raid_dp, normal
                                         RAID Type: raid_dp
                                           Is Root: true
      Space Used by Metadata for Volume Efficiency: 0B
                                              Size: 900MB
                                             State: online
                                         Used Size: 856.5MB
                                 Number Of Volumes: 1
                                      Volume Style: flex

NetUA::>

aggr0_01 is configured using RAID group “/aggr0_01/plex0/rg0”.

 

Let’s have a closer look at rg0.

NetUA::> system node run -node NetUA-01 aggr status aggr0_01 -r

Aggregate aggr0_01 (online, raid_dp) (block checksums)
  Plex /aggr0_01/plex0 (online, normal, active)
    RAID group /aggr0_01/plex0/rg0 (normal, block checksums)

      RAID Disk Device  HA  SHELF BAY CHAN Pool Type  RPM  Used (MB/blks)    Phys (MB/blks)
      --------- ------  ------------- ---- ---- ---- ----- --------------    --------------
      dparity   v5.16   v5    ?   ?   FC:B   -  FCAL 15000 1020/2089984      1027/2104448
      parity    v5.17   v5    ?   ?   FC:B   -  FCAL 15000 1020/2089984      1027/2104448
      data      v5.18   v5    ?   ?   FC:B   -  FCAL 15000 1020/2089984      1027/2104448

NetUA::>

Aggregate Name = aggr0_01
Node Name = NetUA-01

We have explored the existing aggregate in detail. Now let’s see how to create a new aggregate.

 

Creating the New Aggregate:

1. To create an aggregate named “NetUA01_aggr1” on node “NetUA-01” with 5 FCAL disks, use the following command.

NetUA::> stor aggr create -aggr NetUA01_aggr1 -node NetUA-01 -diskcount 5 -disktype FCAL
  (storage aggregate create)
[Job 80] Job succeeded: DONE

NetUA::>

 

2. Verify the newly created aggregate.

NetUA::> storage aggregate show
Aggregate     Size Available Used% State   #Vols  Nodes            RAID Status
--------- -------- --------- ----- ------- ------ ---------------- ------------
NetUA01_aggr1
            2.64GB    2.64GB    0% online       0 NetUA-01         raid_dp,
                                                                   normal
aggr0_01     900MB   39.08MB   96% online       1 NetUA-01         raid_dp,
                                                                   normal
aggr0_02     900MB   43.54MB   95% online       1 NetUA-02         raid_dp,
                                                                   normal
3 entries were displayed.

NetUA::>

 

Adding new disks to Aggregate:

1. Add two disks to the aggregate “NetUA01_aggr1”.

NetUA::> aggr add-disks -aggr NetUA01_aggr1 -disktype FCAL -diskcount 2

 

2.Verify the storage size.

NetUA::>  storage aggregate show
Aggregate     Size Available Used% State   #Vols  Nodes            RAID Status
--------- -------- --------- ----- ------- ------ ---------------- ------------
NetUA01_aggr1
            4.39GB    4.39GB    0% online       0 NetUA-01         raid_dp,
                                                                   normal
aggr0_01     900MB   43.51MB   95% online       1 NetUA-01         raid_dp,
                                                                   normal
aggr0_02     900MB   43.54MB   95% online       1 NetUA-02         raid_dp,
                                                                   normal
3 entries were displayed.

 

3. Verify the newly added disks.

NetUA::> aggr show -aggr NetUA01_aggr1

Aggregate: NetUA01_aggr1
Checksum Style: block
Number Of Disks: 7
Nodes: NetUA-01
Disks: NetUA-01:v4.16,
NetUA-01:v5.19,
NetUA-01:v4.17,
NetUA-01:v5.20,
NetUA-01:v4.18,
NetUA-01:v5.21,
NetUA-01:v4.19
Free Space Reallocation: off
HA Policy: sfo
Space Reserved for Snapshot Copies: -
Hybrid Enabled: false
Available Size: 4.39GB
Checksum Enabled: true
Checksum Status: active
Has Mroot Volume: false
Has Partner Node Mroot Volume: false
Home ID: 4079432749
Home Name: NetUA-01
Total Hybrid Cache Size: 0B
Hybrid: false
Inconsistent: false
Is Aggregate Home: true
Max RAID Size: 16
Flash Pool SSD Tier Maximum RAID Group Size: -
Owner ID: 4079432749
Owner Name: NetUA-01
Used Percentage: 0%
Plexes: /NetUA01_aggr1/plex0
RAID Groups: /NetUA01_aggr1/plex0/rg0 (block)
RAID Status: raid_dp, normal
RAID Type: raid_dp
Is Root: false
Space Used by Metadata for Volume Efficiency: 0B
Size: 4.39GB
State: online
Used Size: 180KB
Number Of Volumes: 0
Volume Style: flex

NetUA::>

 

4. Check the RAID status.

NetUA::> system node run -node NetUA-01 aggr status NetUA01_aggr1 -r

Aggregate NetUA01_aggr1 (online, raid_dp) (block checksums)
  Plex /NetUA01_aggr1/plex0 (online, normal, active)
    RAID group /NetUA01_aggr1/plex0/rg0 (normal, block checksums)

      RAID Disk Device  HA  SHELF BAY CHAN Pool Type  RPM  Used (MB/blks)    Phys (MB/blks)
      --------- ------  ------------- ---- ---- ---- ----- --------------    --------------
      dparity   v4.16   v4    ?   ?   FC:B   -  FCAL 15000 1020/2089984      1027/2104448
      parity    v5.19   v5    ?   ?   FC:B   -  FCAL 15000 1020/2089984      1027/2104448
      data      v4.17   v4    ?   ?   FC:B   -  FCAL 15000 1020/2089984      1027/2104448
      data      v5.20   v5    ?   ?   FC:B   -  FCAL 15000 1020/2089984      1027/2104448
      data      v4.18   v4    ?   ?   FC:B   -  FCAL 15000 1020/2089984      1027/2104448
      data      v5.21   v5    ?   ?   FC:B   -  FCAL 15000 1020/2089984      1027/2104448
      data      v4.19   v4    ?   ?   FC:B   -  FCAL 15000 1020/2089984      1027/2104448

NetUA::>

 

We have just created the aggregate layer. In the upcoming articles, we will see how to create the Vserver and flex volumes.

 

Hope this article is informative to you.

The post NetApp – Clustered DATA ONTAP – Storage Aggregate – Part 9 appeared first on UnixArena.

NetApp – Clustered DATA ONTAP – Storage Failover – SFO – Part 10


In a NetApp storage system, RAID provides fault tolerance in case of disk failures. But what happens if the controller (node) itself fails? NetApp ships the controllers as HA pairs (two controllers in one chassis). If one node fails, the other controller automatically takes over its storage. Storage failover (SFO) is enabled within pairs, regardless of how many nodes are in the cluster. For SFO, both controllers of an HA pair must be of the same model. The cluster itself can contain a mixture of models, but each HA pair must be homogeneous. The version of the Data ONTAP operating system must be the same on both nodes of the HA pair, except for the short period during which the pair is upgraded. Two HA interconnect cables are required to connect the NVRAM cards (except for FAS and V-Series 32x0 models with single-enclosure HA). Storage failover (SFO) can be enabled on either node in the pair and can be initiated from any node in the cluster.

Cluster high availability (HA) is activated automatically when you enable storage failover on clusters that consist of two nodes, and you should be aware that automatic giveback is enabled by default. On clusters that consist of more than two nodes, automatic giveback is disabled by default, and cluster HA is disabled automatically.
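For reference, the commands below are a minimal sketch (not from the original lab; exact syntax and defaults can vary between ONTAP releases, so check your version’s man pages) of how storage failover is typically enabled on an HA pair and how cluster HA and the auto-giveback setting can be verified. The node names follow the NetUA-01/NetUA-02 convention used in this series.

NetUA::> storage failover modify -node NetUA-01 -enabled true
NetUA::> storage failover modify -node NetUA-02 -enabled true
NetUA::> cluster ha modify -configured true
NetUA::> cluster ha show
NetUA::> storage failover show
NetUA::> storage failover show -fields auto-giveback

The cluster ha modify step applies only to two-node clusters; on larger clusters, cluster HA stays disabled and only the per-pair storage failover setting is used.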

Let’s have closer look at HA Pairs:

HA pair controllers are connected to each other through an HA interconnect. This allows one node to serve data that
resides on the disks of its failed partner node. Each node continually monitors its partner, mirroring the data for each other’s nonvolatile memory (NVRAM or NVMEM). The interconnect is internal and requires no external cabling if both  controllers are in the same chassis.

NetApp HA Pairs

 

HA pairs are components of the cluster, and both nodes in the HA pair are connected to other nodes in the cluster
through the data and cluster networks. But only the nodes in the HA pair can take over each other’s storage. Non-HA nodes are not supported in a cluster that contains two or more nodes. Although single-node clusters are supported,  joining two single-node clusters to create one cluster is not supported, unless you wipe clean one of the single-node  clusters and join it to the other to create a two-node cluster that consists of an HA pair.

Let’s see what happens during an unplanned event:

  1. Assume that Node1 and Node2 own their root and data aggregates.
  2. Node1 fails.
  3. Node2 takes over the root and data aggregates of Node1.
Unplanned SFO NetApp

When a node fails, an unplanned (automatic) takeover is initiated (Data ONTAP 8.2 and prior). Ownership of the data aggregates is changed to the HA partner. After the ownership is changed, the partner can read and write to the volumes on the failed node’s data aggregates. Ownership of the aggr0 disks remains with the failed node, but the partner takes over control of the aggregate, which can be mounted from the partner for diagnostic purposes.

Giveback :

  1. Automatic or manual giveback is initiated with the storage failover giveback command.
  2. Aggr0 is given back to node 1 to boot the node.
  3. Data aggregate giveback occurs one aggregate at a time.

Giveback is initiated by the storage failover giveback command, or automatically if the system is configured for automatic giveback. The node must have access to its root volume on aggr0 to fully boot. The CFO HA policy ensures that aggr0 is given back immediately to allow the node to boot. After the node has fully booted, the partner node returns ownership of the data aggregates one at a time until giveback is complete. You can monitor the progress of the giveback with the storage failover show-giveback command. I/O resumes for each aggregate when giveback is complete for that aggregate, thereby reducing the overall outage window of each aggregate.
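As an illustration (a hedged example added here, assuming NetUA-01 is the node that was taken over), a manual giveback and its monitoring would look roughly like this:

NetUA::> storage failover giveback -ofnode NetUA-01
NetUA::> storage failover show-giveback
NetUA::> storage failover show

The show-giveback output lists each aggregate and its giveback status, so you can confirm that aggr0 is returned first and the data aggregates follow one at a time.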

Aggregate Relocation (ARL):

Aggregate relocation operations take advantage of the HA configuration to move the ownership of storage aggregates
within the HA pair. Aggregate relocation occurs automatically during manually initiated takeover and giveback operations to reduce downtime during maintenance. Aggregate relocation can be initiated manually for load balancing. Aggregate relocation cannot move ownership of the root aggregate.

During a manually initiated takeover, before the target controller is taken over, ownership of each aggregate that belongs to the target controller is moved to the partner controller one aggregate at a time. When giveback is initiated, the ownership is automatically moved back to the original node. To suppress aggregate relocation during the takeover, use the -bypass-optimization parameter with the storage failover takeover command.
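For example, the sketch below (not part of the original walkthrough; it assumes NetUA01_aggr1 is an SFO data aggregate owned by NetUA-01, and parameter names follow the 8.2-era clustershell) shows a manual aggregate relocation for load balancing and a takeover that bypasses the optimization:

NetUA::> storage aggregate relocation start -node NetUA-01 -destination NetUA-02 -aggregate-list NetUA01_aggr1
NetUA::> storage aggregate relocation show
NetUA::> storage failover takeover -ofnode NetUA-02 -bypass-optimization true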

Planned Event in ONTAP  8.2 with ARL:

When a node takes over its partner, it continues to serve and update data in the partner’s aggregates and volumes. To do this, it takes ownership of the partner’s data aggregates, and the partner’s LIFs migrate according to network interface failover rules.

ONTAP 8.2 ARL

 

What is the difference between NetApp CFO and SFO ?

  • Root aggregates are always assigned the CFO (controller failover) HA policy.
  • Data aggregates are assigned the SFO (storage failover) HA policy.

 

Check the HA Pair status :

cluster::> storage failover show
                    Takeover
Node      Partner   Possible State
--------- --------- -------- --------------------------------
A         B         true     Connected to B
B         A         true     Connected to A

 

Check the  aggregate’s failover policy on the cluster nodes.

NetUA::> aggregate show -node NetUA-01 -fields ha-policy
  (storage aggregate show)
aggregate     ha-policy
------------- ---------
NetUA01_aggr1 sfo
aggr0_01      cfo
2 entries were displayed.

NetUA::> aggregate show -node NetUA-02 -fields ha-policy
  (storage aggregate show)
aggregate ha-policy
--------- ---------
aggr0_02  cfo

NetUA::>

aggr0_xx holds the root volume of a controller node, so its failover policy is always set to CFO. All data aggregates have their failover policy set to SFO.

Note: We should not store any data volumes on aggr0.

 

The following commands display the storage failover details for a specific node.

NetUA::> storage failover show -node NetUA-01
NetUA::> storage failover show -node NetUA-02

NetUA-01 & NetUA-02  are HA node names.

 

To disable automatic giveback on the HA nodes, use the following commands.

NetUA::> storage failover modify -node NetUA-01 -auto-giveback false
NetUA::> storage failover modify -node NetUA-02 -auto-giveback false

 

To enable automatic giveback on the HA nodes, use the following commands.

NetUA::> storage failover modify -node NetUA-01 -auto-giveback true
NetUA::> storage failover modify -node NetUA-02 -auto-giveback true

 

To initiate a takeover, use one of the following commands.

NetUA::storage failover> storage failover takeover -ofnode  NetUA-02

or 

NetUA::storage failover> storage failover takeover -bynode   NetUA-01
NetUA::storage failover>

You can use either of the above commands to take over the NetUA-02 node’s storage.

Please read the storage failover man page carefully to understand the available options.

Note: { -ofnode {|local} - Node to Takeover
This specifies the node that is taken over. It is shut down and its partner takes over its storage.

| -bynode {|local} } - Node Initiating Takeover
This specifies the node that is to take over its partner's storage.

[-option ] - Takeover Option
This optionally specifies the style of takeover operation. Possible values include the following:

[-bypass-optimization {true|false}] - Bypass Takeover Optimization
If this is an operator-initiated planned takeover, this parameter specifies whether the takeover optimization is bypassed. This parameter defaults to false.

[-skip-lif-migration [true]] - Skip LIF Migration
This parameter specifies that LIF migration prior to takeover is skipped. Without this parameter, the command attempts to synchronously migrate data and cluster management LIFs away from the node prior to its takeover. If the migration fails or times out, the takeover is aborted.

o normal - Specifies a normal takeover operation; that is, the partner is given the time to close its storage resources gracefully before
the takeover operation proceeds. This is the default value.

o immediate - Specifies an immediate takeover. In an immediate takeover, the takeover operation is initiated before the partner is given the time to close its storage resources gracefully. The use of this option results in an immediate takeover which does not do a clean shutdown. In case of NDU this can result in a NDU failure.

Attention: If this option is specified, negotiated takeover optimization is bypassed even if the -bypass-optimization option is set to false.

o allow-version-mismatch - If this value is specified, the takeover operation is initiated even if the partner is running a version of software that is incompatible with the version running on the node. In this case, the partner is given the time to close its storage resources gracefully before the takeover operation proceeds. Use this value as part of a non-disruptive upgrade procedure.

o force - If this value is specified, the takeover operation is initiated even if the node detects an error that normally prevents a takeover operation from occurring. This value is available only at the advanced privilege level and higher.

Attention: If this option is specified, negotiated takeover optimization is bypassed even if the -bypass-optimization option is set to false.
Caution: The use of this option can potentially result in data loss. If the HA interconnect is detached or inactive, or the contents of the failover partner's NVRAM cards are unsynchronized, takeover is normally disabled. Using the -force option enables a node to take over its partner's storage despite the unsynchronized NVRAM, which can contain client data that can be lost upon storage takeover.
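To complete the exercise after the takeover of NetUA-02 shown above (a hedged sketch; output formats differ between releases), you can verify the takeover state and then return the storage:

NetUA::> storage failover show-takeover
NetUA::> storage failover giveback -ofnode NetUA-02
NetUA::> storage failover show-giveback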

 

SFO summary – NetApp

Hope this gives you a fair idea about storage failover in NetApp clustered Data ONTAP.

The post NetApp – Clustered DATA ONTAP – Storage Failover – SFO – Part 10 appeared first on UnixArena.


NetApp – Clustered DATA ONTAP – Data Vserver (SVM)- Part 11


NetApp clustered Data ONTAP consists of three types of Vservers, which help in managing the nodes, the cluster, and data access for clients.

  1. Node Vserver – Manages the node. It is created automatically when a node joins the cluster.
  2. Admin Vserver – Manages the entire cluster. It is created automatically during cluster setup.
  3. Data Vserver – The cluster administrator must create data Vservers and add volumes to them to facilitate data access from the cluster. A cluster must have at least one data Vserver to serve data to its clients.

A data virtual storage server (Vserver) contains data volumes and one or more LIFs through which it serves data to clients. A data Vserver can contain FlexVol volumes or an Infinite Volume. The data Vserver securely isolates the shared virtualized data storage and network, and appears as a single dedicated server to its clients. Each Vserver has a separate administrator authentication domain and can be managed independently by a Vserver administrator. A cluster can have one or more Vservers with FlexVol volumes and Vservers with Infinite Volumes.

Vserver:

1.Login to the cluster LIF and check the existing Vserver.

NetUA::> vserver show
                    Admin     Root                  Name    Name
Vserver     Type    State     Volume     Aggregate  Service Mapping
----------- ------- --------- ---------- ---------- ------- -------
NetUA       admin   -         -          -          -       -
NetUA-01    node    -         -          -          -       -
NetUA-02    node    -         -          -          -       -
3 entries were displayed.

NetUA::>

The existing Vservers were created when the cluster was configured. We need to create a data Vserver to serve clients.

 

2.Check the existing volumes on the cluster.

NetUA::> volume show
Vserver   Volume       Aggregate    State      Type       Size  Available Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
NetUA-01  vol0         aggr0_01     online     RW      851.5MB    421.8MB   50%
NetUA-02  vol0         aggr0_02     online     RW      851.5MB    421.0MB   50%
2 entries were displayed.

NetUA::>

 

3. Check the available aggregates on the cluster.

NetUA::> storage aggregate show
Aggregate     Size Available Used% State   #Vols  Nodes            RAID Status
--------- -------- --------- ----- ------- ------ ---------------- ------------
NetUA01_aggr1
            4.39GB    4.39GB    0% online       0 NetUA-01         raid_dp,
                                                                   normal
aggr0_01     900MB   43.54MB   95% online       1 NetUA-01         raid_dp,
                                                                   normal
aggr0_02     900MB   43.54MB   95% online       1 NetUA-02         raid_dp,
                                                                   normal
3 entries were displayed.

NetUA::>

 

4.Create a data Vserver named ua_vs1 and provide the root volume name as “ua_vs1_root”.

NetUA::> vserver create -vserver ua_vs1 -rootvolume ua_vs1_root -aggregate NetUA01_aggr1 -ns-switch file -rootvolume-security-style unix
[Job 103] Job succeeded:
Vserver creation completed

5.List the Vservers again.

NetUA::> vserver show
                    Admin     Root                  Name    Name
Vserver     Type    State     Volume     Aggregate  Service Mapping
----------- ------- --------- ---------- ---------- ------- -------
NetUA       admin   -         -          -          -       -
NetUA-01    node    -         -          -          -       -
NetUA-02    node    -         -          -          -       -
ua_vs1      data    running   ua_vs1_    NetUA01_   file    file
                              root       aggr1
4 entries were displayed.

NetUA::>

We can see that the new data Vserver has been created successfully and that aggregate “NetUA01_aggr1” has been mapped to it.

 

6. List the volumes again.

NetUA::> vol show
  (volume show)
Vserver   Volume       Aggregate    State      Type       Size  Available Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
NetUA-01  vol0         aggr0_01     online     RW      851.5MB    420.5MB   50%
NetUA-02  vol0         aggr0_02     online     RW      851.5MB    420.4MB   50%
ua_vs1    ua_vs1_root  NetUA01_aggr1
                                    online     RW         20MB    18.89MB    5%
3 entries were displayed.

NetUA::>

You can see that the ua_vs1_root volume has been created with a size of 20MB. This is the root volume of data SVM ua_vs1.

 

7. Check the vServer properties.

NetUA::> vserver show -vserver ua_vs1

                                    Vserver: ua_vs1
                               Vserver Type: data
                               Vserver UUID: d1ece3f0-9b76-11e5-b3cd-123478563412
                                Root Volume: ua_vs1_root
                                  Aggregate: NetUA01_aggr1
                        Name Service Switch: file
                        Name Mapping Switch: file
                                 NIS Domain: -
                 Root Volume Security Style: unix
                                LDAP Client: -
               Default Volume Language Code: C.UTF-8
                            Snapshot Policy: default
                                    Comment:
                 Antivirus On-Access Policy: default
                               Quota Policy: default
                List of Aggregates Assigned: -
 Limit on Maximum Number of Volumes allowed: unlimited
                        Vserver Admin State: running
                          Allowed Protocols: nfs, cifs, fcp, iscsi, ndmp
                       Disallowed Protocols: -
            Is Vserver with Infinite Volume: false
                           QoS Policy Group: -

NetUA::>

 

If you want to allow only the NFS protocol for this Vserver, you can modify it using the following command.

NetUA::> vserver modify -vserver ua_vs1 -allowed-protocols nfs

NetUA::> vserver show -vserver ua_vs1

                                    Vserver: ua_vs1
                               Vserver Type: data
                               Vserver UUID: d1ece3f0-9b76-11e5-b3cd-123478563412
                                Root Volume: ua_vs1_root
                                  Aggregate: NetUA01_aggr1
                        Name Service Switch: file
                        Name Mapping Switch: file
                                 NIS Domain: -
                 Root Volume Security Style: unix
                                LDAP Client: -
               Default Volume Language Code: C.UTF-8
                            Snapshot Policy: default
                                    Comment:
                 Antivirus On-Access Policy: default
                               Quota Policy: default
                List of Aggregates Assigned: -
 Limit on Maximum Number of Volumes allowed: unlimited
                        Vserver Admin State: running
                          Allowed Protocols: nfs
                       Disallowed Protocols: cifs, fcp, iscsi, ndmp
            Is Vserver with Infinite Volume: false
                           QoS Policy Group: -

NetUA::>

 

8. Check the junction path of the root volume.

NetUA::> volume show -vserver ua_vs1  -volume ua_vs1_root  -fields junction-path
vserver volume      junction-path
------- ----------- -------------
ua_vs1  ua_vs1_root /

NetUA::>

 

9. How do we access this data Vserver? To access the SVM (data Vserver), you need to create a data LIF. Let’s create a NAS data LIF for SVM ua_vs1. NFS and CIFS clients will use this IP to access the shares.

NetUA::> net int create -vserver ua_vs1 -lif uadata1 -role data -home-node NetUA-01  -home-port e0c -address 192.168.0.123 -netmask 255.255.255.0
  (network interface create)

NetUA::>

 

10. Review the newly created data LIF for “ua_vs1” SVM.

NetUA::> net int show
  (network interface show)
            Logical    Status     Network            Current       Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
----------- ---------- ---------- ------------------ ------------- ------- ----
NetUA
            cluster_mgmt up/up    192.168.0.101/24   NetUA-01      e0f     false
NetUA-01
            clus1        up/up    169.254.81.224/16  NetUA-01      e0a     true
            clus2        up/up    169.254.220.127/16 NetUA-01      e0b     true
            mgmt1        up/up    192.168.0.91/24    NetUA-01      e0f     true
NetUA-02
            clus1        up/up    169.254.124.94/16  NetUA-02      e0a     true
            clus2        up/up    169.254.244.74/16  NetUA-02      e0b     true
            mgmt1        up/up    192.168.0.92/24    NetUA-02      e0f     true
ua_vs1
            uadata1      up/up    192.168.0.123/24   NetUA-01      e0c     true
8 entries were displayed.

NetUA::>

 

11. To see detailed information about the LIF, use the following command.

NetUA::> net int show -vserver ua_vs1 -lif uadata1
  (network interface show)

                    Vserver Name: ua_vs1
          Logical Interface Name: uadata1
                            Role: data
                   Data Protocol: nfs, cifs, fcache
                       Home Node: NetUA-01
                       Home Port: e0c
                    Current Node: NetUA-01
                    Current Port: e0c
              Operational Status: up
                 Extended Status: -
                         Is Home: true
                 Network Address: 192.168.0.123
                         Netmask: 255.255.255.0
             Bits in the Netmask: 24
                 IPv4 Link Local: -
              Routing Group Name: d192.168.0.0/24
           Administrative Status: up
                 Failover Policy: nextavail
                 Firewall Policy: data
                     Auto Revert: false
   Fully Qualified DNS Zone Name: none
         DNS Query Listen Enable: false
             Failover Group Name: system-defined
                        FCP WWPN: -
                  Address family: ipv4
                         Comment: -

NetUA::>

 

12. NFS and CIFS clients might be in a different network than the data LIF network, so you might need to configure a default route for the data LIF to reach them. Review the automatically created routing groups.

NetUA::> network routing-groups show
          Routing
Vserver   Group     Subnet          Role         Metric
--------- --------- --------------- ------------ -------
NetUA
          c192.168.0.0/24
                    192.168.0.0/24  cluster-mgmt      20
NetUA-01
          c169.254.0.0/16
                    169.254.0.0/16  cluster           30
          n192.168.0.0/24
                    192.168.0.0/24  node-mgmt         10
NetUA-02
          c169.254.0.0/16
                    169.254.0.0/16  cluster           30
          n192.168.0.0/24
                    192.168.0.0/24  node-mgmt         10
ua_vs1
          d192.168.0.0/24
                    192.168.0.0/24  data              20
6 entries were displayed.

NetUA::>

 

13.View the static routes that were automatically created for you within their respective routing groups.

NetUA::> network routing-groups route show
          Routing
Vserver   Group     Destination     Gateway         Metric
--------- --------- --------------- --------------- ------
NetUA
          c192.168.0.0/24
                    0.0.0.0/0       192.168.0.1     20
NetUA-01
          n192.168.0.0/24
                    0.0.0.0/0       192.168.0.1     10
NetUA-02
          n192.168.0.0/24
                    0.0.0.0/0       192.168.0.1     10
3 entries were displayed.

NetUA::>

 

14.Create a static route for the routing group that was automatically created when you created LIF “uadata1”.

NetUA::> net routing-groups route create -vserver ua_vs1  -routing-group d192.168.0.0/24 -destination 0.0.0.0/0 -gateway 192.168.0.1
  (network routing-groups route create)

NetUA::>

 

15. Review the static route of ua_vs1.

NetUA::> network routing-groups route show
          Routing
Vserver   Group     Destination     Gateway         Metric
--------- --------- --------------- --------------- ------
NetUA
          c192.168.0.0/24
                    0.0.0.0/0       192.168.0.1     20
NetUA-01
          n192.168.0.0/24
                    0.0.0.0/0       192.168.0.1     10
NetUA-02
          n192.168.0.0/24
                    0.0.0.0/0       192.168.0.1     10
ua_vs1
          d192.168.0.0/24
                    0.0.0.0/0       192.168.0.1     20
4 entries were displayed.

NetUA::>

16. The LIF has been created on NetUA-01. What will happen if NetUA-01 fails? By default, the LIF is assigned to the system-defined failover group.

Here ua_vs1’s data LIF is hosted on NetUA-01 and it is the home node for that LIF.

NetUA::> net int show
  (network interface show)
            Logical    Status     Network            Current       Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
----------- ---------- ---------- ------------------ ------------- ------- ----
NetUA
            cluster_mgmt up/up    192.168.0.101/24   NetUA-01      e0f     false
NetUA-01
            clus1        up/up    169.254.81.224/16  NetUA-01      e0a     true
            clus2        up/up    169.254.220.127/16 NetUA-01      e0b     true
            mgmt1        up/up    192.168.0.91/24    NetUA-01      e0f     true
NetUA-02
            clus1        up/up    169.254.124.94/16  NetUA-02      e0a     true
            clus2        up/up    169.254.244.74/16  NetUA-02      e0b     true
            mgmt1        up/up    192.168.0.92/24    NetUA-02      e0f     true
ua_vs1
            uadata1      up/up    192.168.0.123/24   NetUA-01      e0c     true
8 entries were displayed.

NetUA::>

 

17.Show the current LIF failover groups and view the targets defined for the data and management LIFs.

NetUA::> net int failover show
  (network interface failover show)
         Logical         Home                  Failover        Failover
Vserver  Interface       Node:Port             Policy          Group
-------- --------------- --------------------- --------------- ---------------
NetUA
         cluster_mgmt    NetUA-01:e0c          nextavail       system-defined
                         Failover Targets: NetUA-01:e0c, NetUA-01:e0d,
                                           NetUA-01:e0e, NetUA-01:e0f,
                                           NetUA-02:e0c, NetUA-02:e0d,
                                           NetUA-02:e0e, NetUA-02:e0f
NetUA-01
         clus1           NetUA-01:e0a          nextavail       system-defined
         clus2           NetUA-01:e0b          nextavail       system-defined
         mgmt1           NetUA-01:e0f          nextavail       system-defined
                         Failover Targets: NetUA-01:e0f
NetUA-02
         clus1           NetUA-02:e0a          nextavail       system-defined
         clus2           NetUA-02:e0b          nextavail       system-defined
         mgmt1           NetUA-02:e0f          nextavail       system-defined
                         Failover Targets: NetUA-02:e0f
ua_vs1
         uadata1         NetUA-01:e0c          nextavail       system-defined
                         Failover Targets: NetUA-01:e0c, NetUA-01:e0d,
                                           NetUA-01:e0e, NetUA-02:e0c,
                                           NetUA-02:e0d, NetUA-02:e0e
8 entries were displayed.

NetUA::>

As per the above output, the ua_vs1 data LIF can fail over to another NIC on NetUA-01 if the current network interface fails, and it will fail over to NetUA-02 if node NetUA-01 goes down.

 

Let’s do the manual failover for ua_vs1’s data LIF.

NetUA::>  net int migrate -vserver ua_vs1 -lif uadata1 -dest-port e0c -dest-node NetUA-02
  (network interface migrate)

NetUA::> net int show
  (network interface show)
            Logical    Status     Network            Current       Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
----------- ---------- ---------- ------------------ ------------- ------- ----
NetUA
            cluster_mgmt up/up    192.168.0.101/24   NetUA-01      e0f     false
NetUA-01
            clus1        up/up    169.254.81.224/16  NetUA-01      e0a     true
            clus2        up/up    169.254.220.127/16 NetUA-01      e0b     true
            mgmt1        up/up    192.168.0.91/24    NetUA-01      e0f     true
NetUA-02
            clus1        up/up    169.254.124.94/16  NetUA-02      e0a     true
            clus2        up/up    169.254.244.74/16  NetUA-02      e0b     true
            mgmt1        up/up    192.168.0.92/24    NetUA-02      e0f     true
ua_vs1
            uadata1      up/up    192.168.0.123/24   NetUA-02      e0c     false
8 entries were displayed.

NetUA::>

Here you can see that the LIF has moved from NetUA-01 to NetUA-02, and that “Is Home” is now set to false for the data LIF.

The failover happens in a fraction of a second, so no client impact is expected. Fail-back behaviour depends on the auto-revert option.

NetUA::> network interface show -vserver ua_vs1 -lif uadata1 -fields auto-revert
vserver lif     auto-revert
------- ------- -----------
ua_vs1  uadata1 false

NetUA::>

You can modify the auto-revert flag using the following command. If auto-revert is set to true, the LIF will automatically fail back to its home node once the node comes back online.

NetUA::> network interface modify  -vserver ua_vs1 -lif uadata1 -auto-revert true

NetUA::> network interface show -vserver ua_vs1 -lif uadata1 -fields auto-revert
vserver lif     auto-revert
------- ------- -----------
ua_vs1  uadata1 true

NetUA::>

You can bring the LIF back to its home node manually using the following command.

NetUA::> network  interface  revert -vserver ua_vs1  -lif uadata1

NetUA::> net int show
  (network interface show)
            Logical    Status     Network            Current       Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
----------- ---------- ---------- ------------------ ------------- ------- ----
NetUA
            cluster_mgmt up/up    192.168.0.101/24   NetUA-01      e0f     false
NetUA-01
            clus1        up/up    169.254.81.224/16  NetUA-01      e0a     true
            clus2        up/up    169.254.220.127/16 NetUA-01      e0b     true
            mgmt1        up/up    192.168.0.91/24    NetUA-01      e0f     true
NetUA-02
            clus1        up/up    169.254.124.94/16  NetUA-02      e0a     true
            clus2        up/up    169.254.244.74/16  NetUA-02      e0b     true
            mgmt1        up/up    192.168.0.92/24    NetUA-02      e0f     true
ua_vs1
            uadata1      up/up    192.168.0.123/24   NetUA-01      e0c     true
8 entries were displayed.

NetUA::>

 

Hope this article is informative to you. In the upcoming article, we will see more about NetApp volumes (FlexVol, Infinite Volume, and FlexCache volumes).

Share it ! Comment it !! Be Sociable !!!

The post NetApp – Clustered DATA ONTAP – Data Vserver (SVM)- Part 11 appeared first on UnixArena.

NetApp – Clustered DATA ONTAP – Volumes – Part 12


In NetApp, volumes are used to store system data, file systems, and client data. By default, the node’s operating system (mroot) is installed on volume “vol0”. Volumes are created on top of aggregates, and vol0 resides on aggr0 by default. The “volume create” command creates a volume on a specified Vserver and storage aggregate. NetApp offers three types of volumes to meet customer requirements.

  • FlexVol Volume
  • Infinite Volume
  • FlexCache Volume

 

 

 FlexVol Volumes:

 

Clustered Data ONTAP flexible volumes are functionally equivalent to flexible volumes in the Data ONTAP 7-Mode
and the Data ONTAP 7G operating system. However, clustered Data ONTAP systems use flexible volumes differently than Data ONTAP 7-Mode and Data ONTAP 7G systems do. Because Data ONTAP clusters are inherently flexible (particularly because of the volume move capability), volumes are deployed as freely as UNIX directories and
Windows folders are deployed to separate logical groups of data.

Volumes can be created and deleted, mounted and unmounted, moved around, and backed up as needed. To take
advantage of this flexibility, cluster deployments typically use many more volumes than traditional Data ONTAP 7G
deployments use. In a high-availability (HA) pair, aggregate and volume limits apply to each node individually, so the overall limit for the pair is effectively doubled.
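As an example of that flexibility (a hypothetical sketch, not from the original lab: it assumes a volume such as uavol1, created later in this article, and a second data aggregate NetUA01_aggr2 on the destination node), a volume can be moved non-disruptively between aggregates with the volume move command:

NetUA::> volume move start -vserver ua_vs1 -volume uavol1 -destination-aggregate NetUA01_aggr2
NetUA::> volume move show

By default the move runs in the background and cuts over automatically when the data copy is complete.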

NetApp SVM FlexVol

 

Let’s create a new FlexVol volume on the ua_vs1 SVM (Vserver).

1. Login to the cluster management LIF as admin.

 

2. Create a new volume on aggregate “NetUA01_aggr1” in Vserver “ua_vs1” with the size of 100MB.

NetUA::> volume create -vserver ua_vs1 -volume uavol1 -aggregate NetUA01_aggr1 -size 100MB -junction-path /uavol1
[Job 105] Job succeeded: Successful

NetUA::>

 

3. List the volumes to see the newly created one.

NetUA::> vol show
  (volume show)
Vserver   Volume       Aggregate    State      Type       Size  Available Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
NetUA-01  vol0         aggr0_01     online     RW      851.5MB    421.0MB   50%
NetUA-02  vol0         aggr0_02     online     RW      851.5MB    420.2MB   50%
ua_vs1    ua_vs1_root  NetUA01_aggr1
                                    online     RW         20MB    18.89MB    5%
ua_vs1    uavol1       NetUA01_aggr1
                                    online     RW        100MB    94.88MB    5%
4 entries were displayed.

NetUA::>

 

4. Verify the junction path of the volume and security style.

NetUA::> vol show -vserver ua_vs1  -volume uavol1 -fields junction-path
  (volume show)
vserver volume junction-path
------- ------ -------------
ua_vs1  uavol1 /uavol1

NetUA::> vol show -vserver ua_vs1  -volume uavol1 -fields security-style
  (volume show)
vserver volume security-style
------- ------ --------------
ua_vs1  uavol1 unix

NetUA::>

 

5. To view the full detail of the volume, use the following command.

NetUA::> vol show -vserver ua_vs1  -volume uavol1
  (volume show)

                                 Vserver Name: ua_vs1
                                  Volume Name: uavol1
                               Aggregate Name: NetUA01_aggr1
                                  Volume Size: 100MB
                           Volume Data Set ID: 1026
                    Volume Master Data Set ID: 2147484674
                                 Volume State: online
                                  Volume Type: RW
                                 Volume Style: flex
                       Is Cluster-Mode Volume: true
                        Is Constituent Volume: false
                                Export Policy: default
                                      User ID: 0
                                     Group ID: 1
                               Security Style: unix
                             UNIX Permissions: ---rwxr-xr-x
                                Junction Path: /uavol1
                         Junction Path Source: RW_volume
                              Junction Active: true
                       Junction Parent Volume: ua_vs1_root
                                      Comment:
                               Available Size: 94.88MB
                              Filesystem Size: 100MB
                      Total User-Visible Size: 95MB
                                    Used Size: 124KB
                              Used Percentage: 5%
         Volume Nearly Full Threshold Percent: 95%
                Volume Full Threshold Percent: 98%
         Maximum Autosize (for flexvols only): 120MB
       Autosize Increment (for flexvols only): 5MB
                             Minimum Autosize: 100MB
           Autosize Grow Threshold Percentage: 85%
         Autosize Shrink Threshold Percentage: 50%
                                Autosize Mode: off
         Autosize Enabled (for flexvols only): false
          Total Files (for user-visible data): 3033
           Files Used (for user-visible data): 96
                        Space Guarantee Style: volume
                    Space Guarantee in Effect: true
            Snapshot Directory Access Enabled: true
                 Space Reserved for Snapshots: 5%
                        Snapshot Reserve Used: 0%
                              Snapshot Policy: default
                                Creation Time: Sat Dec 05 17:53:43 2015
                                     Language: C.UTF-8
                                 Clone Volume: false
                   Antivirus On-Access Policy: default
                                    Node name: NetUA-01
                                NVFAIL Option: off
                    Is File System Size Fixed: false
                                Extent Option: off
                Reserved Space for Overwrites: 0B
                           Fractional Reserve: 100%
                  Snapshot Cloning Dependency: off
            Primary Space Management Strategy: volume_grow
                     Read Reallocation Option: off
             Inconsistency in the File System: false
                 Is Volume Quiesced (On-Disk): false
               Is Volume Quiesced (In-Memory): false
    Volume Contains Shared or Compressed Data: false
            Space Saved by Storage Efficiency: 0B
       Percentage Saved by Storage Efficiency: 0%
                 Space Saved by Deduplication: 0B
            Percentage Saved by Deduplication: 0%
                Space Shared by Deduplication: 0B
                   Space Saved by Compression: 0B
        Percentage Space Saved by Compression: 0%
                                   Block Type: 64-bit
                  FlexCache Connection Status: -
                             Is Volume Moving: false
               Flash Pool Caching Eligibility: read-write
Flash Pool Write Caching Ineligibility Reason: -
                   Managed By Storage Service: -
Create Namespace Mirror Constituents For SnapDiff Use: -
                      Constituent Volume Role: -
                        QoS Policy Group Name: -
              Is Volume Move in Cutover Phase: false
      Number of Snapshot Copies in the Volume: 0

NetUA::>

This FlexVol volume can be exported to NFS clients directly. For CIFS clients, you need to create shares.
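For illustration, the commands below are a hedged sketch (not part of the original walkthrough: they assume the NFS service is enabled on ua_vs1 and, for the CIFS share, that the cifs protocol is allowed and a CIFS server has already been created for the SVM) of how an export-policy rule and a CIFS share could be added:

NetUA::> vserver export-policy rule create -vserver ua_vs1 -policyname default -clientmatch 192.168.0.0/24 -rorule any -rwrule any
NetUA::> vserver cifs share create -vserver ua_vs1 -share-name uavol1 -path /uavol1

NFS clients matching the rule can then mount the volume through its junction path, and CIFS clients can map the share by name.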

 

7-Mode Volume vs C-Mode Volume:

  • The C-Mode namespace allows a volume to be mounted on top of another volume.
7-Mode volume vs C-Mode Volumes

 

 

  • Volumes are distributed across the nodes of the cluster.
Distributed volumes on C-Mode

 

Junctions are conceptually similar to UNIX mount points. In UNIX, a disk can be divided into partitions, and then those partitions can be mounted at multiple places relative to the root of the local file system, including in a hierarchical manner. Likewise, the flexible volumes in a Data ONTAP cluster can be mounted at junction points within other volumes to form a single namespace that is distributed throughout the cluster. Although junctions appear as directories, junctions have the basic functionality of symbolic links. A volume is not visible in the namespace of its Vserver until the volume is mounted within the namespace.
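As a quick illustration of nested junctions (a hypothetical example: the volume name uavol2 does not exist in this lab), a new volume can be mounted beneath an existing volume’s junction so that it appears as a subdirectory of uavol1 in the namespace:

NetUA::> volume create -vserver ua_vs1 -volume uavol2 -aggregate NetUA01_aggr1 -size 50MB -junction-path /uavol1/uavol2

An NFS client that mounts the Vserver root (or /uavol1) then sees uavol2 as a directory under uavol1, even though it is a separate FlexVol volume.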

 

Junction paths:

NetUA::> volume show -vserver ua_vs1 -volume * -fields junction-path
vserver volume      junction-path
------- ----------- -------------
ua_vs1  ua_vs1_root /
ua_vs1  uavol1      /uavol1
2 entries were displayed.

NetUA::>

 

If you would like to modify the junction path of an existing volume, just unmount and remount it as shown below.

NetUA::> volume show -vserver ua_vs1 -volume * -fields junction-path
vserver volume      junction-path
------- ----------- -------------
ua_vs1  ua_vs1_root /
ua_vs1  uavol1      /uavol1
2 entries were displayed.

NetUA::> volume unmount -vserver ua_vs1 -volume uavol1

NetUA::> volume mount -vserver ua_vs1 -volume uavol1 -junction-path /uavol1_new

NetUA::> volume show
    show           show-footprint show-space
NetUA::> volume show -vserver ua_vs1 -volume * -fields junction-path
vserver volume      junction-path
------- ----------- -------------
ua_vs1  ua_vs1_root /
ua_vs1  uavol1      /uavol1_new
2 entries were displayed.

NetUA::>

 

You need the junction path to mount the volumes on NFS clients.

Example:

root@uacloud:~# mount -t nfs 192.168.0.123:/uavol1_new /uavol1
192.168.0.123 is the Vserver data LIF.
/uavol1_new is the volume's junction path.

 

FlexCache Volumes:

A FlexCache volume is a sparsely-populated volume on a cluster node, that is backed by a FlexVol volume. It is usually created on a different node within the cluster. A FlexCache volume provides access to data in the origin  volume without requiring that all the data be in the sparse volume. You can use only FlexVol volumes to create  FlexCache volumes. However, many of the regular FlexVol volume features are not supported on FlexCache  volumes, such as Snapshot copy creation, deduplication, compression, FlexClone volume creation, volume move, and volume copy. You can use FlexCache volumes to speed up access to data, or to offload traffic from heavily accessed volumes. FlexCache volumes help improve performance, especially when clients need to access the same data repeatedly, because the data can be served directly without having to access the source. Therefore, you can use FlexCache volumes to handle system workloads that are read-intensive. Cache consistency techniques help in ensuring that the data that is served by the FlexCache volumes remains consistent with the data in the origin volumes.

FlexCache Volume

 

Reasons to deploy FlexCache:

  • Decrease IO latency
  • Increase IOPS
  • Balance Resources

 

 

Flex Cache – Scenario

 

  • The cache volumes are a part of the same namespace as the origin volume.
  • An incoming file operation may be served from the cache volume or from the origin volume depending on which LIF is used for the operation. If a cache volume exists on the node containing the LIF that gets the incoming request, the operation may be served from the cache volume on that node.
  • FlexCache is suitable for work loads that are read intensive and for data that does not change frequently.
  • Pulls the data on demand
  • Just caches the data blocks requested. Unlike load sharing mirrors, FlexCache only caches the data blocks that are accessed. A cache is populated only when there is a request for the data.
  • It supports NFS v3/v4  and SMB 1.0/2.0/3.0
  • Clients are not aware of which volume (origin or FlexCache) is serving the data.

 

Let’s assume that uavol1 is experiencing performance issues. To improve its read performance, create a FlexCache volume for it.

1.Login to the Cluster LIF and list the volumes.

NetUA::> vol show
  (volume show)
Vserver   Volume       Aggregate    State      Type       Size  Available Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
NetUA-01  vol0         aggr0_01     online     RW      851.5MB    359.1MB   57%
NetUA-02  vol0         aggr0_02     online     RW      851.5MB    412.0MB   51%
ua_vs1    ua_vs1_root  NetUA01_aggr1
                                    online     RW         20MB    18.88MB    5%
ua_vs1    uavol1       NetUA01_aggr1
                                    online     RW        100MB    94.87MB    5%
4 entries were displayed.

NetUA::>

 

2. Create the FlexCache volume for uavol1. The following command creates cache volumes on all the cluster nodes if you do not specify node names.

NetUA::> volume flexcache create -vserver ua_vs1  -origin-volume uavol1

Successfully created cache volume "uavol1_cache_NetUA01_aggr1" in aggregate "NetUA01_aggr1".
Successfully created cache volume "uavol1_cache_NetUA01_aggr2" in aggregate "NetUA01_aggr2".
The origin volume "uavol1" is now cached on all qualifying aggregates in the cluster.

NetUA::>

 

3. Verify the newly created FlexCache volumes.

NetUA::> vol show
    show           show-footprint show-space

NetUA::> vol show
  (volume show)
Vserver   Volume       Aggregate    State      Type       Size  Available Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
NetUA-01  vol0         aggr0_01     online     RW      851.5MB    359.4MB   57%
NetUA-02  vol0         aggr0_02     online     RW      851.5MB    412.6MB   51%
ua_vs1    ua_vs1_root  NetUA01_aggr1
                                    online     RW         20MB    18.88MB    5%
ua_vs1    uavol1       NetUA01_aggr1
                                    online     RW        100MB    94.87MB    5%
ua_vs1    uavol1_cache_NetUA01_aggr1
                       NetUA01_aggr1
                                    online     DC         20MB    19.90MB    0%
ua_vs1    uavol1_cache_NetUA01_aggr2
                       NetUA01_aggr2
                                    online     DC         20MB    19.90MB    0%
6 entries were displayed.

4. If you would like to check the active FlexCache volumes on the cluster, use the following command.

NetUA::> volume flexcache show
        --------------------Cache-------------- Conn.- -----Origin-----------
Vserver Volume Aggregate Size  State  Available Status Volume Aggregate State
------- ------ --------- ----- ------ --------- ------ ------ --------- -----
ua_vs1  uavol1_cache_NetUA01_aggr1
               NetUA01_aggr1
                          20MB online   19.90MB     ok uavol1 NetUA01_aggr1
                                                                        online
        uavol1_cache_NetUA01_aggr2
               NetUA01_aggr2
                          20MB online   19.90MB     ok uavol1 NetUA01_aggr1
                                                                        online
2 entries were displayed.

NetUA::>

 

5. To list a specific volume and its related cache volumes, use the following command.

NetUA::> volume show -vserver ua_vs1  -volume uavol1*
Vserver   Volume       Aggregate    State      Type       Size  Available Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
ua_vs1    uavol1       NetUA01_aggr1
                                    online     RW        100MB    94.87MB    5%
ua_vs1    uavol1_cache_NetUA01_aggr1
                       NetUA01_aggr1
                                    online     DC         20MB    19.90MB    0%
ua_vs1    uavol1_cache_NetUA01_aggr2
                       NetUA01_aggr2
                                    online     DC         20MB    19.90MB    0%
3 entries were displayed.

NetUA::>

 

6. You also have the option to filter the volumes by type.

NetUA::> volume show -type ?
  RW                          read-write volume
  LS                          load-sharing volume
  DP                          data-protection volume
  TMP                         temporary volume
  DC                          data-cache volume

NetUA::> volume show -vserver ua_vs1  -volume uavol1*
NetUA::> volume show -type  DC
Vserver   Volume       Aggregate    State      Type       Size  Available Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
ua_vs1    uavol1_cache_NetUA01_aggr1
                       NetUA01_aggr1
                                    online     DC         20MB    19.90MB    0%
ua_vs1    uavol1_cache_NetUA01_aggr2
                       NetUA01_aggr2
                                    online     DC         20MB    19.90MB    0%
2 entries were displayed.

NetUA::> volume show -type  RW
Vserver   Volume       Aggregate    State      Type       Size  Available Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
NetUA-01  vol0         aggr0_01     online     RW      851.5MB    359.0MB   57%
NetUA-02  vol0         aggr0_02     online     RW      851.5MB    411.7MB   51%
ua_vs1    ua_vs1_root  NetUA01_aggr1
                                    online     RW         20MB    18.88MB    5%
ua_vs1    uavol1       NetUA01_aggr1
                                    online     RW        100MB    94.87MB    5%
4 entries were displayed.

NetUA::>

 

7. FlexCache volumes use the same namespace (junction path) as the origin FlexVol volume.

NetUA::> volume show -fields junction-path uavol1*
vserver volume junction-path
------- ------ -------------
ua_vs1  uavol1 /uavol1_new
ua_vs1  uavol1_cache_NetUA01_aggr1
               /uavol1_new
ua_vs1  uavol1_cache_NetUA01_aggr2
               /uavol1_new
3 entries were displayed.

NetUA::>
NetUA::>

 

8. To see the FlexCache cache policy for all the Vservers, you need to switch to the advanced privilege level.

NetUA::> set advanced

Warning: These advanced commands are potentially dangerous; use them only when directed to do so by NetApp personnel.
Do you want to continue? {y|n}: y

NetUA::*> volume show
    show           show-footprint show-space
NetUA::*> volume flexcache
    cache-policy create       delete       show

NetUA::*> volume flexcache cache-policy show
          Policy     File   Dir    Meta   Symbol Other  Delegation  Prefer
Vserver   Name       TTL    TTL    TTL    TTL    TTL    LRU timeout LocalCache
--------- ---------- ------ ------ ------ ------ ------ ----------- ----------
ua_vs1    default    0      0      15     0      0      3600        false

NetUA::*>

 

INFINITE VOLUME:

SVMs with Infinite Volume can contain only one infinite volume to serve data. Each SVM with Infinite Volume includes only one junction path, which has a default value of /NS. The junction provides a single mount point for the large namespace provided by the SVM with Infinite Volume. You cannot add more junctions to an SVM with Infinite Volume. However, you can increase the size of the infinite volume. SVMs with Infinite Volume can contain only files. They provide file-level data access by using NFS and CIFS (SMB 1.0) protocols. SVMs with Infinite Volume cannot contain LUNs and do not provide block-level data access.

 

NetApp Create the Infinite Volume

 

A namespace mirror is a type of data-protection mirror; it is not a load-sharing or FlexCache device. The namespace mirror is not an active namespace constituent, and it cannot serve incoming requests until it is promoted to the namespace constituent role, which happens only if the original namespace constituent becomes unavailable.

 

Infinite Volume – Namespace

 

Let’s Configure the Infinite volume.

1. Login to the cluster LIF as admin.

2. Create a Vserver to host the infinite volume.

NetUA::*> vserver create -vserver infisvm -rootvolume infisvm_root -aggregate NetUA01_aggr2 -ns-switch file -nm-switch file -rootvolume-security-style unix -language C -is-repository true
[Job 165] Job succeeded:                                                                                                                             Vserver creation completed
NetUA::*>
NetUA::*> vserver show
                    Admin     Root                  Name    Name
Vserver     Type    State     Volume     Aggregate  Service Mapping
----------- ------- --------- ---------- ---------- ------- -------
NetUA       admin   -         -          -          -       -
NetUA-01    node    -         -          -          -       -
NetUA-02    node    -         -          -          -       -
infisvm     data    running   infisvm_   NetUA01_   file    file
                              root       aggr2
ua_vs1      data    running   ua_vs1_    NetUA01_   file    file
                              root       aggr1
5 entries were displayed.

NetUA::*> vserver show infisvm

                                    Vserver: infisvm
                               Vserver Type: data
                               Vserver UUID: 30bd38e7-9d13-11e5-b3cd-123478563412
                                Root Volume: infisvm_root
                                  Aggregate: NetUA01_aggr2
                        Name Service Switch: file
                        Name Mapping Switch: file
                                 NIS Domain: -
                 Root Volume Security Style: unix
                                LDAP Client: -
               Default Volume Language Code: C
                            Snapshot Policy: default-1weekly
                                    Comment:
                 Antivirus On-Access Policy: repos_disabled_antivirus_onaccess_policy
                               Quota Policy: default
                List of Aggregates Assigned: -
 Limit on Maximum Number of Volumes allowed: unlimited
                        Vserver Admin State: running
                          Allowed Protocols: nfs, cifs
                       Disallowed Protocols: fcp, iscsi, ndmp
            Is Vserver with Infinite Volume: true
                           QoS Policy Group: -

NetUA::*>

 

3. Create a 2-GB infinite volume. You must be in advanced privilege mode to create the infinite volume.

NetUA::*> vol create -vserver infisvm -volume bigvol1 -size 2gb -junction-path /bigvol1 -state online -policy repos_namespace_export_policy -data-aggr-list NetUA01_aggr2,NetUA01_aggr1
  (volume create)
[Job 168] Job succeeded: Created Infinite Volume successfully.

NetUA::*>

NetUA::*> vol show -vserver infisvm
  (volume show)
Vserver   Volume       Aggregate    State      Type       Size  Available Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
infisvm   bigvol1      -            online     RW          2GB     1.90GB    5%
infisvm   infisvm_root NetUA01_aggr2
                                    online     RW         20MB    18.88MB    5%
2 entries were displayed.

NetUA::*>

 

4. You can see more information about the infinite volume when you switch to diag mode.

NetUA::*> set diag

Warning: These diagnostic commands are for use by NetApp personnel only.
Do you want to continue? {y|n}: y

NetUA::*> vol show -vserver infisvm
  (volume show)
Vserver   Volume       Aggregate    State      Type       Size  Available Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
infisvm   bigvol1      -            online     RW          2GB     1.90GB    5%
infisvm   bigvol1_1024_data0001
                       NetUA01_aggr1
                                    online     RW        768MB    729.5MB    5%
infisvm   bigvol1_1024_data0002
                       NetUA01_aggr2
                                    online     RW        768MB    729.5MB    5%
infisvm   bigvol1_ns   NetUA01_aggr1
                                    online     RW        256MB    243.1MB    5%
infisvm   bigvol1_ns_mirror0001
                       NetUA01_aggr2
                                    online     DP        256MB    243.1MB    5%
infisvm   infisvm_root NetUA01_aggr2
                                    online     RW         20MB    18.86MB    5%
6 entries were displayed.


NetUA::*> vol show -vserver infisvm  -fields is-constituent,constituent-role
  (volume show)
vserver volume  is-constituent constituent-role
------- ------- -------------- ----------------
infisvm bigvol1 false          -
infisvm bigvol1_1024_data0001
                true           data
infisvm bigvol1_1024_data0002
                true           data
infisvm bigvol1_ns
                true           namespace
infisvm bigvol1_ns_mirror0001
                true           ns_mirror
infisvm infisvm_root
                false          -
6 entries were displayed.

NetUA::*> vol show -vserver infisvm  -fields aggregate,size
  (volume show)
vserver volume  aggregate size
------- ------- --------- ----
infisvm bigvol1 -         2GB
infisvm bigvol1_1024_data0001
                NetUA01_aggr1
                          768MB
infisvm bigvol1_1024_data0002
                NetUA01_aggr2
                          768MB
infisvm bigvol1_ns
                NetUA01_aggr1
                          256MB
infisvm bigvol1_ns_mirror0001
                NetUA01_aggr2
                          256MB
infisvm infisvm_root
                NetUA01_aggr2
                          20MB
6 entries were displayed.

NetUA::*>

 

5. Verify the junction path for the infinite volume.

NetUA::> vol show -vserver infisvm -volume bigvol1 -fields junction-path
  (volume show)
vserver volume  junction-path
------- ------- -------------
infisvm bigvol1 /bigvol1

NetUA::>

 

6. In order to access the Vserver that hosts the infinite volume, you must create a data LIF for it.

NetUA::> net int create -vserver infisvm  -lif infisvmlif -role data -home-node NetUA-02  -home-port e0d -address 192.168.0.124 -netmask 255.255.255.0
  (network interface create)

Info: Your interface was created successfully; the routing group d192.168.0.0/24 was created

NetUA::> net int show
  (network interface show)
            Logical    Status     Network            Current       Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
----------- ---------- ---------- ------------------ ------------- ------- ----
NetUA
            cluster_mgmt up/up    192.168.0.101/24   NetUA-02      e0f     false
NetUA-01
            clus1        up/up    169.254.81.224/16  NetUA-01      e0a     true
            clus2        up/up    169.254.220.127/16 NetUA-01      e0b     true
            mgmt1        up/up    192.168.0.91/24    NetUA-01      e0f     true
NetUA-02
            clus1        up/up    169.254.124.94/16  NetUA-02      e0a     true
            clus2        up/up    169.254.244.74/16  NetUA-02      e0b     true
            mgmt1        up/up    192.168.0.92/24    NetUA-02      e0f     true
infisvm
            infisvmlif   up/up    192.168.0.124/24   NetUA-02      e0d     true
ua_vs1
            uadata1      up/up    192.168.0.123/24   NetUA-01      e0c     true
9 entries were displayed.

NetUA::>

 

7. Add the default route for the Vserver “infisvm”.

NetUA::> net routing-groups route create -vserver infisvm  -routing-group d192.168.0.0/24 -destination 0.0.0.0/0 -gateway 192.168.0.1
  (network routing-groups route create)

NetUA::> net routing-groups route show
  (network routing-groups route show)
          Routing
Vserver   Group     Destination     Gateway         Metric
--------- --------- --------------- --------------- ------
NetUA
          c192.168.0.0/24
                    0.0.0.0/0       192.168.0.1     20
NetUA-01
          n192.168.0.0/24
                    0.0.0.0/0       192.168.0.1     10
NetUA-02
          n192.168.0.0/24
                    0.0.0.0/0       192.168.0.1     10
infisvm
          d192.168.0.0/24
                    0.0.0.0/0       192.168.0.1     20
ua_vs1
          d192.168.0.0/24
                    0.0.0.0/0       192.168.0.1     20
5 entries were displayed.

NetUA::>

You should now be able to reach the SVM “infisvm” from NFS or CIFS clients. To mount the infinite volume, you must configure an export policy; we will cover export policies in the upcoming article.
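
As a quick preview (the export-policy configuration itself is covered in the next part), once a policy with a suitable rule has been applied to infisvm_root and bigvol1, mounting the infinite volume from a Linux NFS client would look roughly like the following sketch. The LIF IP and junction path are the ones created above; the client-side mount point /bigvol1 is only an example.

# mkdir /bigvol1
# mount -t nfs 192.168.0.124:/bigvol1 /bigvol1
# df -h /bigvol1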

Hope this article is informative to you. Share it ! Comment it !! Be Sociable !!!.

The post NetApp – Clustered DATA ONTAP – Volumes – Part 12 appeared first on UnixArena.

NetApp – Clustered DATA ONTAP – Configure Export policy – Part 13


Export policies are used to restrict NFS/CIFS access to volumes to clients that match specific parameters. An export policy contains one or more rules that process each client access request. A Vserver can contain multiple export policies, and each volume can be associated with the desired export policy to provide access to clients. When you create a Vserver with FlexVol volumes, the SVM (Vserver) automatically creates an export policy called “default” for the root volume of the Vserver; this policy contains no rules. You must create one or more rules for the default export policy before clients can access data on the Vserver, or alternatively create a custom export policy with rules. You can modify and rename the default export policy, but you cannot delete it.

You must already have a Vserver and volumes in order to assign an export policy.

 

Let’s create a new export policy and assign it to the existing volumes.

1. Login to the cluster LIF as admin user.

2. List the existing data Vservers.

NetUA::> vserver show -type data
                    Admin     Root                  Name    Name
Vserver     Type    State     Volume     Aggregate  Service Mapping
----------- ------- --------- ---------- ---------- ------- -------
infisvm     data    running   infisvm_   NetUA01_   file    file
                              root       aggr2
ua_vs1      data    running   ua_vs1_    NetUA01_   file    file
                              root       aggr1
2 entries were displayed.

NetUA::>

 

3. List the data volumes from the existing data Vservers.

NetUA::> volume show -vserver ua_vs1,infisvm -type RW
Vserver   Volume       Aggregate    State      Type       Size  Available Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
infisvm   bigvol1      -            online     RW          2GB     1.90GB    5%
infisvm   infisvm_root NetUA01_aggr2
                                    online     RW         20MB    18.87MB    5%
ua_vs1    ua_vs1_root  NetUA01_aggr1
                                    online     RW         20MB    18.88MB    5%
ua_vs1    uavol1       NetUA01_aggr1
                                    online     RW        100MB    94.87MB    5%
4 entries were displayed.

NetUA::>

 

4. Check the existing export policies. The “infisvm” policies were created automatically during Vserver creation since it hosts an infinite volume.

NetUA::> export-policy show
  (vserver export-policy show)
Vserver          Policy Name
---------------  -------------------
infisvm          default
infisvm          repos_namespace_export_policy
infisvm          repos_restricted_export_policy
infisvm          repos_root_readonly_export_policy
ua_vs1           default
5 entries were displayed.

NetUA::>

 

5. Let’s create a new export policy for the Vserver “ua_vs1”.

NetUA::> export-policy create -vserver ua_vs1 -policyname uavspol1
  (vserver export-policy create)

NetUA::>
NetUA::> export-policy show -vserver ua_vs1
  (vserver export-policy show)
Vserver          Policy Name
---------------  -------------------
ua_vs1           default
ua_vs1           uavspol1
2 entries were displayed.

NetUA::>

 

6. Create a new rule for the “uavspol1” policy.

NetUA::> export-policy rule create -vserver ua_vs1 -policyname uavspol1 -clientmatch 0.0.0.0/0.0 -rorule any -rwrule any -allow-suid true
  (vserver export-policy rule create)

NetUA::> export-policy rule show -vserver ua_vs1
  (vserver export-policy rule show)
             Policy          Rule    Access   Client                RO
Vserver      Name            Index   Protocol Match                 Rule
------------ --------------- ------  -------- --------------------- ---------
ua_vs1       uavspol1        1       any      0.0.0.0/0.0           any

NetUA::>

 

To create a rule for a specific host, use the following command.

NetUA::> export-policy rule create -vserver ua_vs1 -policyname uavspol1 -clientmatch 192.168.0.150 -rorule any -rwrule any -allow-suid true
  (vserver export-policy rule create)

NetUA::> export-policy rule show -vserver ua_vs1
  (vserver export-policy rule show)
             Policy          Rule    Access   Client                RO
Vserver      Name            Index   Protocol Match                 Rule
------------ --------------- ------  -------- --------------------- ---------
ua_vs1       uavspol1        1       any      0.0.0.0/0.0           any
ua_vs1       uavspol1        2       any      192.168.0.150         any
2 entries were displayed.

NetUA::>

You can add any number of clients by adding more rules.
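
As another illustration (this rule is hypothetical and not part of the lab session above), a rule like the following would grant an entire subnet read-only access while denying writes; the subnet 192.168.1.0/24 is only an example.

NetUA::> export-policy rule create -vserver ua_vs1 -policyname uavspol1 -clientmatch 192.168.1.0/24 -rorule any -rwrule never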

 

7. Apply the policy to the Vserver ua_vs1’s volumes.

NetUA::> vol show -vserver ua_vs1 -type rw
  (volume show)
Vserver   Volume       Aggregate    State      Type       Size  Available Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
ua_vs1    ua_vs1_root  NetUA01_aggr1
                                    online     RW         20MB    18.88MB    5%
ua_vs1    uavol1       NetUA01_aggr1
                                    online     RW        100MB    94.86MB    5%
2 entries were displayed.

NetUA::> 
NetUA::> vol modify -vserver ua_vs1 -policy uavspol1 -volume uavol1
  (volume modify)

Volume modify successful on volume: uavol1
NetUA::>

The following information is required to mount the volume on NFS clients.

 

Find the “ua_vs1” LIF IP address that will be used to mount the volume on the NFS client.

NetUA::> net int show -vserver ua_vs1
  (network interface show)
            Logical    Status     Network            Current       Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
----------- ---------- ---------- ------------------ ------------- ------- ----
ua_vs1
            uadata1      up/up    192.168.0.123/24   NetUA-01      e0c     true

NetUA::>

 

Find the junction path for volume uavol1.

NetUA::> volume show -vserver ua_vs1 -volume uavol1 -fields junction-path
vserver volume junction-path
------- ------ -------------
ua_vs1  uavol1 /uavol1_new

NetUA::>

 

 

Mount the Volume on Linux Client:

1. Login to the Linux host.

2. Try to mount the uavol1 volume.

root@uacloud:~# mount -t nfs 192.168.0.123:/uavol1_new /uavol1
mount.nfs: access denied by server while mounting 192.168.0.123:/uavol1_new
root@uacloud:~#

Error: mount.nfs: access denied by server while mounting XXX.XXX.XXX.XXX:/volume_name.

Most of the time, you will face this issue when the export policy has not been set on the Vserver root volume.
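
To see at a glance which export policy each volume in the Vserver is using, a quick check with the same -fields syntax used elsewhere in this article (a sketch, output not shown) is:

NetUA::> volume show -vserver ua_vs1 -fields policy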

 

Login to the cluster LIF as admin and set the policy on the Vserver root volume as well.

NetUA::> vol modify -vserver ua_vs1 -policy uavspol1 -volume ua_vs1_root
  (volume modify)

Volume modify successful on volume: ua_vs1_root

NetUA::> 
NetUA::> volume show -vserver ua_vs1 -volume ua_vs1_root -fields policy
vserver volume      policy
------- ----------- --------
ua_vs1  ua_vs1_root uavspol1

NetUA::>

 

Try to mount the volume “uavol1” again.

root@uacloud:~# mount -t nfs 192.168.0.123:/uavol1_new /uavol1
root@uacloud:~# df -h /uavol1
Filesystem                 Size  Used Avail Use% Mounted on
192.168.0.123:/uavol1_new   95M  128K   95M   1% /uavol1
root@uacloud:~#

Success!!! We have successfully mounted the volume on the Linux host.

 

Hope this article is informative to you . Share it ! Comment it !! Be Sociable !!!

The post NetApp – Clustered DATA ONTAP – Configure Export policy – Part 13 appeared first on UnixArena.

NetApp – Clustered DATA ONTAP – SAN – Part 14


NetApp is a unified storage platform that supports both NAS and SAN protocols. SAN is block-based storage that uses the FC, Fibre Channel over Ethernet (FCoE), and iSCSI protocols to make data available over the network. Starting with the Data ONTAP 8.1 operating system, clustered Data ONTAP began supporting SAN on clusters of up to four nodes. In the Data ONTAP 8.2 operating system, SAN is supported in clusters of up to eight nodes.

 

NetApp Unified Storage

 

NetApp supports the following SAN protocols:

  1. FC
  2. FCoE
  3. iSCSI

 

NetApp SAN Protocols

 

In clustered Data ONTAP,

  • NAS scales up to 12 HA pairs and supports NFS, pNFS, and CIFS.
  • SAN scales up to 4 HA pairs and supports FC, FCoE, and iSCSI.

 

Typical FC Network – NetApp SAN Environment: 

There are multiple ways to connect initiators and targets together. Which is best? It depends on your architectural requirements. Usually, in an enterprise environment, switches are used to provide connectivity between host initiators and storage targets.

NetApp FC Network

 

Typical iSCSI Network – NetApp SAN Environment: 

NetApp iSCSI Environment

 

How to configure the iSCSI Vserver and create LUNs?

1. Login to the cluster LIF as the admin user.

2. Create a new Vserver for iSCSI.

NetUA::> vserver create -vserver uaiscsi -rootvolume uaiscsi_root -aggregate NetUA01_aggr1 -ns-switch file -rootvolume-security-style unix
[Job 186] Job succeeded:
Vserver creation completed

NetUA::>

 

3. Allow only the iSCSI protocol on this Vserver.

NetUA::> vserver modify -vserver uaiscsi  -allowed-protocols iscsi

 

4. Review the new dedicated iSCSI Vserver.

NetUA::> vserver show uaiscsi

                                    Vserver: uaiscsi
                               Vserver Type: data
                               Vserver UUID: f1f7f244-9dee-11e5-b3cd-123478563412
                                Root Volume: uaiscsi_root
                                  Aggregate: NetUA01_aggr1
                        Name Service Switch: file
                        Name Mapping Switch: file
                                 NIS Domain: -
                 Root Volume Security Style: unix
                                LDAP Client: -
               Default Volume Language Code: C.UTF-8
                            Snapshot Policy: default
                                    Comment:
                 Antivirus On-Access Policy: default
                               Quota Policy: default
                List of Aggregates Assigned: -
 Limit on Maximum Number of Volumes allowed: unlimited
                        Vserver Admin State: running
                          Allowed Protocols: iscsi
                       Disallowed Protocols: nfs, cifs, fcp, ndmp
            Is Vserver with Infinite Volume: false
                           QoS Policy Group: -

NetUA::>

 

5. Create a new data LIF for the “uaiscsi” SVM. List the available ports and pick a “data” role port for the iSCSI traffic.

NetUA::> net port show
  (network port show)
                                      Auto-Negot  Duplex     Speed (Mbps)
Node   Port   Role         Link   MTU Admin/Oper  Admin/Oper Admin/Oper
------ ------ ------------ ---- ----- ----------- ---------- ------------
NetUA-01
       e0a    cluster      up    1500  true/true  full/full   auto/1000
       e0b    cluster      up    1500  true/true  full/full   auto/1000
       e0c    data         up    1500  true/true  full/full   auto/1000
       e0d    data         up    1500  true/true  full/full   auto/1000
       e0e    data         up    1500  true/true  full/full   auto/1000
       e0f    node-mgmt    up    1500  true/true  full/full   auto/1000
NetUA-02
       e0a    cluster      up    1500  true/true  full/full   auto/1000
       e0b    cluster      up    1500  true/true  full/full   auto/1000
       e0c    data         up    1500  true/true  full/full   auto/1000
       e0d    data         up    1500  true/true  full/full   auto/1000
       e0e    data         up    1500  true/true  full/full   auto/1000
       e0f    node-mgmt    up    1500  true/true  full/full   auto/1000
12 entries were displayed.

NetUA::>
NetUA::> net int create -vserver uaiscsi  -lif uaiscsi1 -role data -home-node NetUA-01  -home-port e0e -address 192.168.0.131 -netmask 255.255.255.0 -data-protocol iscsi
  (network interface create)
NetUA::> net int show
  (network interface show)
            Logical    Status     Network            Current       Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
----------- ---------- ---------- ------------------ ------------- ------- ----
NetUA
            cluster_mgmt up/up    192.168.0.101/24   NetUA-01      e0c     true
NetUA-01
            clus1        up/up    169.254.81.224/16  NetUA-01      e0a     true
            clus2        up/up    169.254.220.127/16 NetUA-01      e0b     true
            mgmt1        up/up    192.168.0.91/24    NetUA-01      e0f     true
NetUA-02
            clus1        up/up    169.254.124.94/16  NetUA-02      e0a     true
            clus2        up/up    169.254.244.74/16  NetUA-02      e0b     true
            mgmt1        up/up    192.168.0.92/24    NetUA-02      e0f     true
infisvm
            infisvmlif   up/up    192.168.0.124/24   NetUA-01      e0d     false
ua_vs1
            uadata1      up/up    192.168.0.123/24   NetUA-01      e0c     true
uaiscsi
            uaiscsi1     up/up    192.168.0.131/24   NetUA-01      e0e     true
10 entries were displayed.
NetUA::>
NetUA::>

 

6. Configure the static route for the Vserver “uaiscsi”.

NetUA::> net routing-groups route create -vserver uaiscsi   -routing-group d192.168.0.0/24 -destination 0.0.0.0/0 -gateway 192.168.0.1
  (network routing-groups route create)

NetUA::>

7. Verify the route for “uaiscsi”.

NetUA::> net routing-groups route show
  (network routing-groups route show)
          Routing
Vserver   Group     Destination     Gateway         Metric
--------- --------- --------------- --------------- ------
NetUA
          c192.168.0.0/24
                    0.0.0.0/0       192.168.0.1     20
NetUA-01
          n192.168.0.0/24
                    0.0.0.0/0       192.168.0.1     10
NetUA-02
          n192.168.0.0/24
                    0.0.0.0/0       192.168.0.1     10
infisvm
          d192.168.0.0/24
                    0.0.0.0/0       192.168.0.1     20
ua_vs1
          d192.168.0.0/24
                    0.0.0.0/0       192.168.0.1     20
uaiscsi
          d192.168.0.0/24
                    0.0.0.0/0       192.168.0.1     20
6 entries were displayed.

8. Create a volume on which to provision the LUN.

NetUA::> volume create -vserver uaiscsi -volume netvol1 -aggregate NetUA01_aggr1 -size 2G
[Job 188] Job succeeded: Successful

NetUA::>

 

9. Create a new LUN on the volume “netvol1”.

NetUA::> lun create -vserver uaiscsi -volume netvol1 -lun lun0 -size 1GB -ostype linux  -space-reserve disabled

Created a LUN of size 1g (1073741824)

NetUA::>
NetUA::> lun show -vserver uaiscsi
Vserver   Path                            State   Mapped   Type        Size
--------- ------------------------------- ------- -------- -------- --------
uaiscsi   /vol/netvol1/lun0               online  unmapped linux         1GB

NetUA::>

We have successfully created a LUN on NetApp Storage.

 

10. Create the iSCSI target.

   
NetUA::> vserver iscsi create -vserver uaiscsi -target-alias uaiscsi -status-admin up

NetUA::> vserver iscsi show
           Target                           Target                       Status
Vserver    Name                             Alias                        Admin
---------- -------------------------------- ---------------------------- ------
uaiscsi    iqn.1992-08.com.netapp:sn.f1f7f2449dee11e5b3cd123478563412:vs.8
                                            uaiscsi                      up

NetUA::>

 

11. Create a new portset for Vserver “uaiscsi”.

NetUA::> portset create -vserver uaiscsi  -portset uaiscsips -protocol iscsi
NetUA::> portset show -vserver uaiscsi -portset uaiscsips

    Vserver Name: uaiscsi
    Portset Name: uaiscsips
        LIF Name: -
        Protocol: iscsi
 Number Of Ports: 0
Bound To Igroups: -

NetUA::>

 

12. Check the LIF name of the “uaiscsi” Vserver and add the LIF to the portset.

NetUA::> net int show uaiscsi1
  (network interface show)
            Logical    Status     Network            Current       Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
----------- ---------- ---------- ------------------ ------------- ------- ----
uaiscsi
            uaiscsi1     up/up    192.168.0.131/24   NetUA-01      e0e     true

NetUA::>
NetUA::> portset add -vserver uaiscsi -portset uaiscsips -port-name uaiscsi1
NetUA::>
NetUA::> portset show -vserver uaiscsi -portset uaiscsips

    Vserver Name: uaiscsi
    Portset Name: uaiscsips
        LIF Name: uaiscsi1
        Protocol: iscsi
 Number Of Ports: 1
Bound To Igroups: 
NetUA::>

 

13. Create an initiator group and bind the newly created portset “uaiscsips” to it.

NetUA::> igroup create -vserver uaiscsi -igroup uaiscsi3 -protocol iscsi -ostype linux -initiator - -portset uaiscsips
NetUA::> portset show -vserver uaiscsi
Vserver   Portset      Protocol Port Names              Igroups
--------- ------------ -------- ----------------------- ------------
uaiscsi   uaiscsips    iscsi    uaiscsi1                uaiscsi3

NetUA::>

 

14. Check the initiator group status.

NetUA::> igroup show -vserver uaiscsi
Vserver   Igroup       Protocol OS Type  Initiators
--------- ------------ -------- -------- ------------------------------------
uaiscsi   uaiscsi3     iscsi    linux    -

 

15. Map the LUN to the initiator group.

NetUA::> lun map -vserver uaiscsi -path /vol/netvol1/lun0 -igroup uaiscsi3

NetUA::> lun show -vserver uaiscsi
Vserver   Path                            State   Mapped   Type        Size
--------- ------------------------------- ------- -------- -------- --------
uaiscsi   /vol/netvol1/lun0               online  mapped   linux         1GB

NetUA::>

 

16. Login to the Linux host (the one to which you would like to provision the LUNs) and get the initiator name.

root@uacloud:~# grep InitiatorName= /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1993-08.org.debian:01:21a0d3d79b9f
root@uacloud:~#

 

17. Add the initiator to the igroup.

NetUA::> igroup add -vserver uaiscsi -igroup uaiscsi3 -initiator iqn.1993-08.org.debian:01:21a0d3d79b9f
NetUA::> igroup show -vserver uaiscsi
Vserver   Igroup       Protocol OS Type  Initiators
--------- ------------ -------- -------- ------------------------------------
uaiscsi   uaiscsi3     iscsi    linux    iqn.1993-08.org.debian:01:21a0d3d79b9f

NetUA::>

 

Configure the iSCSI initiator on Linux Host (Ubuntu):

1. Login to the Linux host.

2. Install the open-iscsi package.

root@uacloud:~# apt-get install open-iscsi

 

3. Try to discover the iSCSI target.

root@uacloud:~# iscsiadm -m discovery -t st -p 192.168.0.131
iscsiadm: cannot make connection to 192.168.0.131: Connection refused
iscsiadm: cannot make connection to 192.168.0.131: Connection refused
iscsiadm: cannot make connection to 192.168.0.131: Connection refused
iscsiadm: cannot make connection to 192.168.0.131: Connection refused
iscsiadm: cannot make connection to 192.168.0.131: Connection refused
iscsiadm: cannot make connection to 192.168.0.131: Connection refused
iscsiadm: connection login retries (reopen_max) 5 exceeded
iscsiadm: Could not perform SendTargets discovery: encountered connection failure
root@uacloud:~#

IP address – the NetApp Vserver LIF (refer to step 12).

If you have not configured the portset and initiator group properly, you may get errors like the above
(iscsiadm: cannot make connection to IP_Address: Connection refused).

If you have configured everything correctly, you will see output like the following.

root@uacloud:~# iscsiadm -m discovery -t st -p 192.168.0.131
192.168.0.131:3260,1030 iqn.1992-08.com.netapp:sn.f1f7f2449dee11e5b3cd123478563412:vs.8
root@uacloud:~#

 

4. Login to the iSCSI target using the following command.

root@uacloud:~# iscsiadm -m node --targetname "iqn.1992-08.com.netapp:sn.f1f7f2449dee11e5b3cd123478563412:vs.8"  --portal "192.168.0.131:3260" --login
Logging in to [iface: default, target: iqn.1992-08.com.netapp:sn.f1f7f2449dee11e5b3cd123478563412:vs.8, portal: 192.168.0.131,3260] (multiple)
Login to [iface: default, target: iqn.1992-08.com.netapp:sn.f1f7f2449dee11e5b3cd123478563412:vs.8, portal: 192.168.0.131,3260] successful.
root@uacloud:~#

--targetname – refer to step 10.
--portal – the NetApp Vserver LIF (refer to step 12).

 

5. If you assign new LUNs to the existing target, you can rescan the iSCSI session using the following commands.

root@uacloud:~# iscsiadm -m session
tcp: [6] 192.168.0.131:3260,1030 iqn.1992-08.com.netapp:sn.f1f7f2449dee11e5b3cd123478563412:vs.8
root@uacloud:~# iscsiadm -m session --sid=6 --rescan
Rescanning session [sid: 6, target: iqn.1992-08.com.netapp:sn.f1f7f2449dee11e5b3cd123478563412:vs.8, portal: 192.168.0.131,3260]

Or use the following to rescan all iSCSI sessions.

root@uacloud:~# iscsiadm -m session --rescan
Rescanning session [sid: 6, target: iqn.1992-08.com.netapp:sn.f1f7f2449dee11e5b3cd123478563412:vs.8, portal: 192.168.0.131,3260]
root@uacloud:~#

 

6. Use the fdisk command to find the new disks.
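
If you are not sure which device is new, a quick way (a generic sketch, not from the original session) is to list all SCSI disks before and after the rescan and compare:

# fdisk -l 2>/dev/null | grep "^Disk /dev/sd"
# lsblk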

 

7. In my case, it is /dev/sdc.

root@uacloud:~# fdisk -l /dev/sdc 
Disk /dev/sdc: 1073 MB, 1073741824 bytes
34 heads, 61 sectors/track, 1011 cylinders, total 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 65536 bytes
Disk identifier: 0x00000000

We have successfully provisioned the NetApp iSCSI LUN to the Linux host.
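
As an optional next step that is not shown in the session above, you could create a filesystem on the LUN and mount it. A minimal sketch, assuming the device is /dev/sdc as in step 7 and /mnt/netapp_lun0 is just an example mount point:

# mkfs.ext4 /dev/sdc
# mkdir -p /mnt/netapp_lun0
# mount /dev/sdc /mnt/netapp_lun0
# df -h /mnt/netapp_lun0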

Hope this article is informative to you. Share it ! Comment it !! Be Sociable !!!

The post NetApp – Clustered DATA ONTAP – SAN – Part 14 appeared first on UnixArena.

NetApp – Clustered DATA ONTAP – Tutorial


NetApp Clustered Data ONTAP is one of the emerging technologies in the storage market. Cloud technologies demonstrated how to reduce IT operation costs by using commodity hardware, which was a wake-up call for many proprietary hardware vendors to provide cheaper and reliable solutions to customers. NetApp woke up early and decided to change its complete storage architecture from 7-Mode to clustered mode. NetApp clustered Data ONTAP works on a distributed storage model, thanks to Spinnaker Networks for this solution (Spinnaker Networks was acquired by NetApp).

NetApp Clustered Data ONTAP is designed to meet current demands and future cloud solutions.

This tutorial is targeted at system administrators and storage administrators who have prior knowledge of other SAN/NAS products.

1. The first article talks about the NetApp Clustered Data ONTAP features.

2. The second article talks about NetApp Clustered Data ONTAP objects and components.

3. This article explains NetApp’s FAS series models and features.

4. This article briefly explains NetApp’s read operations.

5. Go through this article if you would like to know how write operations happen on NetApp cDOT.

 

You need to follow the articles listed below in order; if you skip one, you might not have the required objects to perform the tasks.

6. This article explains how to configure a two-node cluster on the NetApp C-Mode operating system – NetApp Clustered ONTAP.

 

7. This article talks about license management on NetApp C-Mode. You need to buy the required NetApp licenses to enable certain features.

 

8. NetApp Clustered Data ONTAP shells overview. You must know the differences between the NetApp shells.

 

9. Configure the NetApp Clustered Data ONTAP – Storage Aggregate.

 

10. Configure storage failover between HA pair nodes.

 

11. Configure the NetApp data Vserver and LIFs.

 

12. Learn more about NetApp volumes (FlexVol, infinite volume, FlexCache volume).

 

13. Configure the export policy for NAS. This article demonstrates how to mount NFS shares on a Linux system.

14. NetApp is unified storage. This article explores SAN in more depth and demonstrates how to provision a LUN and assign it to a Linux host.

Hope this tutorial will help you start your journey with NetApp Clustered ONTAP.

In the NetApp advanced tutorial, we will cover:

  • NetApp Volume Snapshots
  • NetApp Snap-mirror
  • NetApp Clone
  • Performance Troubleshooting

Thanks to NetApp.

Share it ! comment it !! Be Sociable !!!

The post NetApp – Clustered DATA ONTAP – Tutorial appeared first on UnixArena.

How to Patch/Update RHEL 7 without internet connection ?


Linux is not a Windows operating system where you install security patches and other bug-fix patches every week. At the same time, it is not like a traditional Unix operating system where you do not need to patch for years. You should plan to patch Red Hat Linux twice a year to close security holes and pick up bug fixes. Red Hat recommends connecting the systems to its repository to update them without much pain, but many customers do not want their systems facing the internet directly for any reason. Some customers use an internet proxy service to connect the systems to the Red Hat repository, and some can afford the Red Hat Satellite server facility. What about companies that are concerned about both security and cost? Red Hat provides an option for those people to update the system using an offline method.

This article demonstrates the offline patching method for RHEL 7. During the update, the system will automatically be upgraded to the latest minor version.

Operating System: RHEL 7.0

 

Full OS update: (package update, kernel update and security update)

Note: In this method, the whole operating system will be upgraded to the latest minor version. In this case, the system will be upgraded to RHEL 7.2.
1. You must have a valid Red Hat subscription to download the latest DVD from Red Hat.

2. Download the latest Red Hat Enterprise Linux Server 7.x (RHEL 7.x binary DVD) ISO from the Red Hat portal.

3. Copy the RHEL 7.x binary ISO to the system that you want to update (patch).

4. Mount the ISO.

[root@UA-HA ~]# mkdir /repo
[root@UA-HA ~]# mount -o loop rhel-server-7.2-x86_64-dvd.iso /repo
[root@UA-HA ~]# ls -lrt /repo
total 872
-r--r--r--.  1 root root  18092 Mar  6  2012 GPL
-r--r--r--.  1 root root   8266 Apr  4  2014 EULA
-r--r--r--.  1 root root   3211 Oct 23 09:25 RPM-GPG-KEY-redhat-release
-r--r--r--.  1 root root   3375 Oct 23 09:25 RPM-GPG-KEY-redhat-beta
-r--r--r--.  1 root root    114 Oct 30 10:54 media.repo
-r--r--r--.  1 root root   1568 Oct 30 11:03 TRANS.TBL
dr-xr-xr-x.  2 root root   4096 Oct 30 11:03 repodata
dr-xr-xr-x. 24 root root   6144 Oct 30 11:03 release-notes
dr-xr-xr-x.  2 root root 835584 Oct 30 11:03 Packages
dr-xr-xr-x.  2 root root   2048 Oct 30 11:03 LiveOS
dr-xr-xr-x.  2 root root   2048 Oct 30 11:03 isolinux
dr-xr-xr-x.  3 root root   2048 Oct 30 11:03 images
dr-xr-xr-x.  3 root root   2048 Oct 30 11:03 EFI
dr-xr-xr-x.  4 root root   2048 Oct 30 11:03 addons
[root@UA-HA ~]#

 

5. Check the current version of Redhat and kernel version.

[root@UA-HA ~]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.0 (Maipo)
[root@UA-HA ~]# uname -mrs
Linux 3.10.0-123.el7.x86_64 x86_64
[root@UA-HA ~]#

 

6. Remove the existing yum repository files. (Re-configure them later if you need them; see the sketch below.)
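
A minimal sketch of backing up and clearing the existing repo files; the backup directory /root/repo-backup is only an example:

# mkdir -p /root/repo-backup
# mv /etc/yum.repos.d/*.repo /root/repo-backup/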

 

7. Create the new repo file in “/etc/yum.repos.d/”.

[root@UA-HA yum.repos.d]# cat /etc/yum.repos.d/ua.repo
[repo]
gpgcheck=0
enabled=1
baseurl=file:///repo
name=repo-update
[root@UA-HA yum.repos.d]#

 

8. List the newly created repo.

[root@UA-HA yum.repos.d]# yum repolist
Loaded plugins: langpacks, product-id, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
repo id                                                                       repo name                                                                        status
repo                                                                          repo-update                                                                        4,305
repolist: 4,305
[root@UA-HA yum.repos.d]# cd
[root@UA-HA ~]#

 

9. Clean the cache, dbcache, expired cache, headers and metadata to perform a full repo metadata cleanup.

[root@UA-HA ~]# yum clean all
Loaded plugins: langpacks, product-id, search-disabled-repos, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
Cleaning repos: repo-update
Cleaning up everything
[root@UA-HA ~]#

 

10. Update the system using the “yum update” command.

[root@UA-HA ~]# yum update -y
Loaded plugins: langpacks, product-id, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
Resolving Dependencies
--> Running transaction check
---> Package ModemManager-glib.x86_64 0:1.1.0-6.git20130913.el7 will be updated
---> Package ModemManager-glib.x86_64 0:1.1.0-8.git20130913.el7 will be an update
---> Package NetworkManager.x86_64 1:0.9.9.1-13.git20140326.4dba720.el7 will be obsoleted
---> Package NetworkManager.x86_64 1:1.0.6-27.el7 will be obsoleting
--> Processing Dependency: NetworkManager-libnm(x86-64) = 1:1.0.6-27.el7 for package: 1:NetworkManager-1.0.6-27.el7.x86_64
--> Processing Dependency: libnm.so.0(libnm_1_0_0)(64bit) for package: 1:NetworkManager-1.0.6-27.el7.x86_64
^C[root@UA-HA ~]#

 

11. Reboot the system using init 6.

 

12. Login to the system and check the kernel version.

[root@UA-HA ~]# uname -mrs
Linux 3.10.0-327.el7.x86_64 x86_64
[root@UA-HA ~]#

 

13. Check the /etc/redhat-release file.

[root@UA-HA ~]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.2 (Maipo)
[root@UA-HA ~]#

 

We can see that the system has been updated successfully.

 

System Package Bug Fixes, Security Updates & Enhancement Updates Only (No Kernel Update):

Some customers would like to stay on the same kernel but still want to apply bug fixes and security updates. In that case, you can simply exclude the kernel.

There are two ways to exclude the kernel update.
Method 1:
Update /etc/yum.conf to exclude the kernel update permanently.

[root@UA-HA ~]# cat /etc/yum.conf |grep -i exclude
#Exclude kernel update
exclude=kernel*
[root@UA-HA ~]#

 

Run yum update command to update the system.

[root@UA-HA ~]# yum update -y

 

Method 2:

While updating the system, you can just use the exclude option.

[root@UA-HA ~]# yum update --exclude=kernel*
Loaded plugins: langpacks, product-id, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
Resolving Dependencies
--> Running transaction check
---> Package ModemManager-glib.x86_64 0:1.1.0-6.git20130913.el7 will be updated
---> Package ModemManager-glib.x86_64 0:1.1.0-8.git20130913.el7 will be an update
---> Package NetworkManager.x86_64 1:0.9.9.1-13.git20140326.4dba720.el7 will be obsoleted
---> Package NetworkManager.x86_64 1:1.0.6-27.el7 will be obsoleting

 

Only Kernel Update:

1. List the available kernel updates.

[root@UA-HA yum.repos.d]# yum list updates 'kernel*'
Loaded plugins: langpacks, product-id, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
Updated Packages
kernel.x86_64                                                                     3.10.0-327.el7                                                          repo-update
kernel-tools.x86_64                                                               3.10.0-327.el7                                                          repo-update
kernel-tools-libs.x86_64                                                          3.10.0-327.el7                                                          repo-update
[root@UA-HA yum.repos.d]#

 

2. Check the currently installed kernel.

[root@UA-HA yum.repos.d]# rpm -q kernel
kernel-3.10.0-123.el7.x86_64
[root@UA-HA yum.repos.d]# yum list installed 'kernel*'
Loaded plugins: langpacks, product-id, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
Installed Packages
kernel.x86_64 3.10.0-123.el7 @anaconda/7.0
kernel-tools.x86_64 3.10.0-123.el7 @anaconda/7.0
kernel-tools-libs.x86_64 3.10.0-123.el7 @anaconda/7.0
[root@UA-HA yum.repos.d]#

 

3. Update only the system kernel.

[root@UA-HA ~]# yum update 'kernel*'
Loaded plugins: langpacks, product-id, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
Resolving Dependencies
--> Running transaction check
---> Package kernel.x86_64 0:3.10.0-327.el7 will be installed
--> Processing Dependency: dracut >= 033-283 for package: kernel-3.10.0-327.el7.x86_64
--> Processing Dependency: linux-firmware >= 20150904-43 for package: kernel-3.10.0-327.el7.x86_64
---> Package kernel-tools.x86_64 0:3.10.0-123.el7 will be updated
---> Package kernel-tools.x86_64 0:3.10.0-327.el7 will be an update
---> Package kernel-tools-libs.x86_64 0:3.10.0-123.el7 will be updated
---> Package kernel-tools-libs.x86_64 0:3.10.0-327.el7 will be an update
--> Running transaction check
---> Package dracut.x86_64 0:033-161.el7 will be updated
--> Processing Dependency: dracut = 033-161.el7 for package: dracut-network-033-161.el7.x86_64
--> Processing Dependency: dracut = 033-161.el7 for package: dracut-config-rescue-033-161.el7.x86_64
---> Package dracut.x86_64 0:033-359.el7 will be an update
--> Processing Dependency: systemd >= 219 for package: dracut-033-359.el7.x86_64
---> Package libertas-sd8686-firmware.noarch 0:20140213-0.3.git4164c23.el7 will be obsoleted
---> Package libertas-sd8787-firmware.noarch 0:20140213-0.3.git4164c23.el7 will be obsoleted
---> Package libertas-usb8388-firmware.noarch 2:20140213-0.3.git4164c23.el7 will be obsoleted
---> Package linux-firmware.noarch 0:20140213-0.3.git4164c23.el7 will be updated
---> Package linux-firmware.noarch 0:20150904-43.git6ebf5d5.el7 will be obsoleting
--> Running transaction check
---> Package dracut-config-rescue.x86_64 0:033-161.el7 will be updated
---> Package dracut-config-rescue.x86_64 0:033-359.el7 will be an update
---> Package dracut-network.x86_64 0:033-161.el7 will be updated
---> Package dracut-network.x86_64 0:033-359.el7 will be an update
---> Package systemd.x86_64 0:208-11.el7 will be updated
--> Processing Dependency: systemd = 208-11.el7 for package: libgudev1-208-11.el7.x86_64
--> Processing Dependency: systemd = 208-11.el7 for package: systemd-python-208-11.el7.x86_64
--> Processing Dependency: systemd = 208-11.el7 for package: systemd-sysv-208-11.el7.x86_64
---> Package systemd.x86_64 0:219-19.el7 will be an update
--> Processing Dependency: systemd-libs = 219-19.el7 for package: systemd-219-19.el7.x86_64
--> Processing Dependency: kmod >= 18-4 for package: systemd-219-19.el7.x86_64
--> Running transaction check
---> Package kmod.x86_64 0:14-9.el7 will be updated
---> Package kmod.x86_64 0:20-5.el7 will be an update
---> Package libgudev1.x86_64 0:208-11.el7 will be updated
---> Package libgudev1.x86_64 0:219-19.el7 will be an update
---> Package systemd-libs.x86_64 0:208-11.el7 will be updated
---> Package systemd-libs.x86_64 0:219-19.el7 will be an update
---> Package systemd-python.x86_64 0:208-11.el7 will be updated
---> Package systemd-python.x86_64 0:219-19.el7 will be an update
---> Package systemd-sysv.x86_64 0:208-11.el7 will be updated
---> Package systemd-sysv.x86_64 0:219-19.el7 will be an update
--> Processing Conflict: systemd-219-19.el7.x86_64 conflicts initscripts < 9.49.28-1 --> Restarting Dependency Resolution with new changes.
--> Running transaction check
---> Package initscripts.x86_64 0:9.49.17-1.el7 will be updated
---> Package initscripts.x86_64 0:9.49.30-1.el7 will be an update
--> Finished Dependency Resolution

Dependencies Resolved

=================================================================================================================
 Package         Arch                         Version          Repository                         Size
=================================================================================================================
Installing:
 kernel                                     x86_64                       3.10.0-327.el7                                      repo-update                        33 M
 linux-firmware                             noarch                       20150904-43.git6ebf5d5.el7                          repo-update                        24 M
     replacing  libertas-sd8686-firmware.noarch 20140213-0.3.git4164c23.el7
     replacing  libertas-sd8787-firmware.noarch 20140213-0.3.git4164c23.el7
     replacing  libertas-usb8388-firmware.noarch 2:20140213-0.3.git4164c23.el7
Updating:
 initscripts                                x86_64                       9.49.30-1.el7                                       repo-update                       429 k
 kernel-tools                               x86_64                       3.10.0-327.el7                                      repo-update                       2.4 M
 kernel-tools-libs                          x86_64                       3.10.0-327.el7                                      repo-update                       2.3 M
Updating for dependencies:
 dracut                                     x86_64                       033-359.el7                                         repo-update                       311 k
 dracut-config-rescue                       x86_64                       033-359.el7                                         repo-update                        49 k
 dracut-network                             x86_64                       033-359.el7                                         repo-update                        90 k
 kmod                                       x86_64                       20-5.el7                                            repo-update                       114 k
 libgudev1                                  x86_64                       219-19.el7                                          repo-update                        64 k
 systemd                                    x86_64                       219-19.el7                                          repo-update                       5.1 M
 systemd-libs                               x86_64                       219-19.el7                                          repo-update                       356 k
 systemd-python                             x86_64                       219-19.el7                                          repo-update                        97 k
 systemd-sysv                               x86_64                       219-19.el7                                          repo-update                        52 k

Transaction Summary
==================================================================================================================
Install  2 Packages
Upgrade  3 Packages (+9 Dependent packages)

Total download size: 68 M
Is this ok [y/d/N]: y
Downloading packages:
------------------------------------------------------------------------------------------------------------------
Total                                                                                                                                 86 MB/s |  68 MB  00:00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Warning: RPMDB altered outside of yum.
  Updating   : systemd-libs-219-19.el7.x86_64                                                                                                                   1/30
  Updating   : systemd-219-19.el7.x86_64                                                                                                                        2/30
  Updating   : dracut-033-359.el7.x86_64                                                                                                                        3/30
  Updating   : kmod-20-5.el7.x86_64                                                                                                                             4/30
  Updating   : initscripts-9.49.30-1.el7.x86_64                                                                                                                 5/30
  Updating   : kernel-tools-libs-3.10.0-327.el7.x86_64                                                                                                          6/30
  Installing : linux-firmware-20150904-43.git6ebf5d5.el7.noarch                                                                                                 7/30
  Installing : kernel-3.10.0-327.el7.x86_64                                                                                                                     8/30
  Updating   : kernel-tools-3.10.0-327.el7.x86_64                                                                                                               9/30
  Updating   : dracut-config-rescue-033-359.el7.x86_64                                                                                                         10/30
  Updating   : dracut-network-033-359.el7.x86_64                                                                                                               11/30
  Updating   : systemd-sysv-219-19.el7.x86_64                                                                                                                  12/30
  Updating   : systemd-python-219-19.el7.x86_64                                                                                                                13/30
  Updating   : libgudev1-219-19.el7.x86_64                                                                                                                     14/30
  Cleanup    : systemd-sysv-208-11.el7.x86_64                                                                                                                  15/30
  Cleanup    : dracut-network-033-161.el7.x86_64                                                                                                               16/30
  Cleanup    : dracut-config-rescue-033-161.el7.x86_64                                                                                                         17/30
  Erasing    : libertas-sd8787-firmware-20140213-0.3.git4164c23.el7.noarch                                                                                     18/30
  Erasing    : libertas-sd8686-firmware-20140213-0.3.git4164c23.el7.noarch                                                                                     19/30
  Erasing    : 2:libertas-usb8388-firmware-20140213-0.3.git4164c23.el7.noarch                                                                                  20/30
  Cleanup    : linux-firmware-20140213-0.3.git4164c23.el7.noarch                                                                                               21/30
  Cleanup    : dracut-033-161.el7.x86_64                                                                                                                       22/30
  Cleanup    : systemd-python-208-11.el7.x86_64                                                                                                                23/30
  Cleanup    : initscripts-9.49.17-1.el7.x86_64                                                                                                                24/30
  Cleanup    : libgudev1-208-11.el7.x86_64                                                                                                                     25/30
  Cleanup    : systemd-208-11.el7.x86_64                                                                                                                       26/30
  Cleanup    : kernel-tools-3.10.0-123.el7.x86_64                                                                                                              27/30
  Cleanup    : kernel-tools-libs-3.10.0-123.el7.x86_64                                                                                                         28/30
  Cleanup    : kmod-14-9.el7.x86_64                                                                                                                            29/30
  Cleanup    : systemd-libs-208-11.el7.x86_64                                                                                                                  30/30
  Verifying  : dracut-config-rescue-033-359.el7.x86_64                                                                                                          1/30
  Verifying  : linux-firmware-20150904-43.git6ebf5d5.el7.noarch                                                                                                 2/30
  Verifying  : dracut-network-033-359.el7.x86_64                                                                                                                3/30
  Verifying  : kernel-tools-3.10.0-327.el7.x86_64                                                                                                               4/30
  Verifying  : kmod-20-5.el7.x86_64                                                                                                                             5/30
  Verifying  : systemd-sysv-219-19.el7.x86_64                                                                                                                   6/30
  Verifying  : libgudev1-219-19.el7.x86_64                                                                                                                      7/30
  Verifying  : systemd-219-19.el7.x86_64                                                                                                                        8/30
  Verifying  : kernel-3.10.0-327.el7.x86_64                                                                                                                     9/30
  Verifying  : dracut-033-359.el7.x86_64                                                                                                                       10/30
  Verifying  : systemd-libs-219-19.el7.x86_64                                                                                                                  11/30
  Verifying  : kernel-tools-libs-3.10.0-327.el7.x86_64                                                                                                         12/30
  Verifying  : initscripts-9.49.30-1.el7.x86_64                                                                                                                13/30
  Verifying  : systemd-python-219-19.el7.x86_64                                                                                                                14/30
  Verifying  : kernel-tools-3.10.0-123.el7.x86_64                                                                                                              15/30
  Verifying  : kmod-14-9.el7.x86_64                                                                                                                            16/30
  Verifying  : dracut-config-rescue-033-161.el7.x86_64                                                                                                         17/30
  Verifying  : systemd-sysv-208-11.el7.x86_64                                                                                                                  18/30
  Verifying  : systemd-python-208-11.el7.x86_64                                                                                                                19/30
  Verifying  : libertas-sd8787-firmware-20140213-0.3.git4164c23.el7.noarch                                                                                     20/30
  Verifying  : 2:libertas-usb8388-firmware-20140213-0.3.git4164c23.el7.noarch                                                                                  21/30
  Verifying  : dracut-033-161.el7.x86_64                                                                                                                       22/30
  Verifying  : initscripts-9.49.17-1.el7.x86_64                                                                                                                23/30
  Verifying  : systemd-libs-208-11.el7.x86_64                                                                                                                  24/30
  Verifying  : systemd-208-11.el7.x86_64                                                                                                                       25/30
  Verifying  : dracut-network-033-161.el7.x86_64                                                                                                               26/30
  Verifying  : libertas-sd8686-firmware-20140213-0.3.git4164c23.el7.noarch                                                                                     27/30
  Verifying  : libgudev1-208-11.el7.x86_64                                                                                                                     28/30
  Verifying  : linux-firmware-20140213-0.3.git4164c23.el7.noarch                                                                                               29/30
  Verifying  : kernel-tools-libs-3.10.0-123.el7.x86_64                                                                                                         30/30

Installed:
  kernel.x86_64 0:3.10.0-327.el7                                          linux-firmware.noarch 0:20150904-43.git6ebf5d5.el7

Updated:
  initscripts.x86_64 0:9.49.30-1.el7                 kernel-tools.x86_64 0:3.10.0-327.el7                 kernel-tools-libs.x86_64 0:3.10.0-327.el7

Dependency Updated:
  dracut.x86_64 0:033-359.el7          dracut-config-rescue.x86_64 0:033-359.el7     dracut-network.x86_64 0:033-359.el7     kmod.x86_64 0:20-5.el7
  libgudev1.x86_64 0:219-19.el7        systemd.x86_64 0:219-19.el7                   systemd-libs.x86_64 0:219-19.el7        systemd-python.x86_64 0:219-19.el7
  systemd-sysv.x86_64 0:219-19.el7

Replaced:
  libertas-sd8686-firmware.noarch 0:20140213-0.3.git4164c23.el7                     libertas-sd8787-firmware.noarch 0:20140213-0.3.git4164c23.el7
  libertas-usb8388-firmware.noarch 2:20140213-0.3.git4164c23.el7

Complete!
[root@UA-HA ~]#

 

4. Reboot the system.

In the GRUB menu, you can see that the system is booting with the new kernel.

GRUB Menu – RHEL 7

 

5. Login to the system again and check the kernel version.

[root@UA-HA ~]# uname -mrs
Linux 3.10.0-327.el7.x86_64 x86_64
[root@UA-HA ~]#

We can see that the system kernel has been upgraded to the latest version.
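
If you want to clean up older kernels after verifying the new one, the following commands are a minimal sketch (package-cleanup is assumed to be available from the yum-utils package):

[root@UA-HA ~]# rpm -q kernel                              # list all installed kernel packages
[root@UA-HA ~]# package-cleanup --oldkernels --count=2     # keep only the two most recent kernels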

 

Install only the security updates: (no update for the kernel & other packages)

Use the following command to apply only the security updates.

# yum -y update --security
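
Before applying them, you can review the pending security errata, assuming your repository carries updateinfo (errata) metadata — a locally built repository without that metadata will report nothing here:

# yum updateinfo summary          # counts of available security, bugfix and enhancement errata
# yum updateinfo list security    # list only the security errata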

Refer to the Red Hat support article for more information.

 

Hope this article is informative to you. Share it ! Comment it !! Be Sociable !!!

The post How to Patch/Update RHEL 7 without internet connection ? appeared first on UnixArena.

Kernel Based Virtual Machine (KVM) – Overview Part 1

$
0
0

KVM is a free, open-source, full virtualization solution for Linux on x86 hardware. After the cloud revolution, KVM (Kernel-based Virtual Machine) virtualization has become a hot topic in the industry. Most cloud technologies prefer the KVM hypervisor over XEN due to its simplicity. Red Hat's and Ubuntu's default hypervisor is KVM. In contrast to these vendors, Oracle Linux uses XEN virtualization. More information about KVM can be obtained from linux-kvm.org.

KVM consists of a loadable kernel module, kvm.ko, that provides the core virtualization infrastructure, and a processor-specific module, kvm-intel.ko (Intel) or kvm-amd.ko (AMD). These modules allow the kernel to become a hypervisor. The kvm.ko kernel module is responsible for exposing "/dev/kvm", which is used by various programs including libvirt.
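
As a quick sanity check (a minimal sketch; the modules are loaded automatically on supported hardware), you can confirm that the modules and the device node are present:

[root@UA-HA ~]# lsmod | grep kvm     # kvm plus kvm_intel or kvm_amd should be listed
[root@UA-HA ~]# ls -l /dev/kvm       # character device exposed by kvm.ko for user-space tools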

KVM was initially developed by Qumranet, which was acquired by Red Hat in 2008.

 

Prerequisites for KVM:

1. Processors with virtualization technology, which accelerates the virtualized guests (see the quick check after this list).

  • Intel  – Intel-VT
  • AMD – AMD-V (SVM)

2. Enable the CPU VT technology in the BIOS.
3. Linux kernel version 2.6.20 or later.
4. Access to a repository to install the necessary KVM packages.
5. Shared Storage (NFS, SAN, NAS).
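
The quick check mentioned in prerequisite 1 (together with the kernel version check for prerequisite 3) can be done as follows:

[root@UA-HA ~]# grep -c -E 'vmx|svm' /proc/cpuinfo    # a non-zero count means the CPU exposes the virtualization flag
[root@UA-HA ~]# uname -r                              # kernel version must be newer than 2.6.20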

 

Supported Guests on KVM:

  • Linux – most Linux flavours are supported.
  • Windows – most Windows guests are supported, including desktops and servers.
  • Unix – BSD, Solaris.

 

Supported Architecture:
KVM supports both 32-bit & 64-bit guest operating systems. To host 64-bit guests, the host system should be 64-bit and VT-enabled.

 

KVM Maximums:

Redhat RHEL KVM Maximums

 

The maximum number of concurrently running virtual guests is 4, but this is not a KVM limitation; Red Hat limits the number of virtual guests through its subscription licensing.

 

Redhat unlimited Virtual Licenses: (15/12/2015)

Redhat Support cost

 

KVM new features:

 

Paravirtualization  vs HVM (Native Hardware Virtualization)

KVM supports paravirtualization, typically for the following components.

  • Networking
  • Block Devices
  • Graphics
  • Memory

Paravirtualization is supported on Linux, BSD, and Windows guests. It improves performance considerably compared to HVM.

 

Networking:
KVM supports the following network features.
NAT: – NAT provides outbound network access to the KVM guests. In other words, a guest machine can access the outside world, but external systems cannot reach the guests. This is the default network for KVM.
Bridges: – A bridge provides access to the public or private network. Unlike NAT, KVM guests can be accessed from outside the host machine. (A quick way to inspect the default NAT network is shown below.)
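
Once libvirt is installed (covered in the next part), the built-in NAT network can be inspected with standard virsh commands — a minimal sketch:

[root@UA-HA ~]# virsh net-list --all         # the default NAT network is named "default"
[root@UA-HA ~]# virsh net-dumpxml default    # shows the virbr0 bridge, subnet and DHCP range used for NAT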

 
KVM Environment:
The KVM environment is maintained in /var/lib/libvirt. This includes the ISO images used for installation, the actual VM guest images, and the network configurations.

[root@UA-HA libvirt]# ls -lrt
total 4
drwx------. 2 root root    6 Oct  8 09:14 lxc
drwx--x--x. 2 root root    6 Oct  8 09:14 images
drwx--x--x. 2 root root    6 Oct  8 09:14 filesystems
drwx--x--x. 2 root root    6 Oct  8 09:14 boot
drwx------. 2 root root    6 Dec 13 00:30 network
drwxr-x--x. 7 qemu qemu   69 Dec 13 00:30 qemu
drwxr-xr-x. 2 root root 4096 Dec 13 13:12 dnsmasq
[root@UA-HA libvirt]#

 

KVM configuration files are stored in the /etc/libvirt/ directory.

[root@UA-HA libvirt]# cd /etc/libvirt/
[root@UA-HA libvirt]# ls -lrt
total 56
-rw-r--r--. 1 root root  2134 Oct  8 09:14 virtlockd.conf
-rw-r--r--. 1 root root  2169 Oct  8 09:14 qemu-lockd.conf
-rw-r--r--. 1 root root 18987 Oct  8 09:14 qemu.conf
drwx------. 3 root root    21 Oct  8 09:14 qemu
-rw-r--r--. 1 root root  1176 Oct  8 09:14 lxc.conf
-rw-r--r--. 1 root root   518 Oct  8 09:14 libvirt.conf
-rw-r--r--. 1 root root 15070 Oct  8 09:14 libvirtd.conf
drwx------. 2 root root  4096 Dec 13 00:25 nwfilter
[root@UA-HA libvirt]#

 

Hope this article is informative to you. Share it ! Comment it !! Be Sociable !!!

The post Kernel Based Virtual Machine (KVM) – Overview Part 1 appeared first on UnixArena.


Redhat Enterprise Linux – KVM Installation Part 2

$
0
0

This article demonstrates how to install the KVM (kernel-based virtual machine) packages on Red Hat Enterprise Linux 7.2. You must have a yum repository to install the KVM packages and their dependencies. There are many GUI tools available to manage KVM. VMM (Virtual Machine Manager) is a well-known GUI tool developed by Red Hat. The following table provides the KVM package names and the key role of each package. "virt-manager" and "virt-install" can be installed on any Linux host to manage the KVM hypervisor hosts.

KVM is a Linux kernel module that allows a user-space program to access the hardware virtualization features of Intel and AMD processors. With the help of the KVM kernel modules, guests run as ordinary user-space processes. KVM uses QEMU for I/O hardware emulation. QEMU is a user-space emulator that can emulate a variety of guest processors on host processors with decent performance. Using the KVM kernel module allows it to approach native speeds. KVM is managed via the libvirt API and tools. (Examples: virsh, virt-install and virt-clone.)

 

Package Name       Description
qemu-kvm           Provides the kvm.ko & kvm_intel kernel modules. Core part of KVM.
qemu-kvm-common    Various BIOS images and network scripts.
qemu-img           Disk image manager on the host Red Hat Enterprise Linux system.
bridge-utils       Provides the bridging between physical interfaces and VM interfaces.
virt-manager       GUI to manage the KVM guests. Allows users to interact with libvirtd and the
                   kernel to create virtual guests.
virt-install       Key CLI package. Provides the binaries "virt-install", "virt-clone", "virt-image" and "virt-convert".
libvirt            Provides the libvirtd daemon that handles the library calls, manages virtual
                   machines and controls the hypervisor.
libvirt-python     Contains a module that permits applications written in the Python programming
                   language to use the interface supplied by the libvirt API.
libvirt-client     Provides the client-side APIs and libraries for accessing libvirt servers,
                   including the virsh command-line tool to manage and control virtual machines
                   and hypervisors from the command line or a special virtualization shell.
libguestfs-tools   A set of tools for accessing and modifying virtual machine (VM) disk images.
                   It can access almost any disk image, including VMware's VMDK & Hyper-V formats.

 

1. Check the host processor's VT support. If you don't find the vmx or svm flag in /proc/cpuinfo, verify that Virtualization Technology (VT) is enabled in your server's BIOS.

[root@UA-HA ~]# grep -E 'svm|vmx' /proc/cpuinfo
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts nopl xtopology tsc_reliable nonstop_tsc aperfmperf eagerfpu pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm ida arat epb pln pts dtherm tpr_shadow vnmi ept vpid fsgsbase smep xsaveopt
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts nopl xtopology tsc_reliable nonstop_tsc aperfmperf eagerfpu pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm ida arat epb pln pts dtherm tpr_shadow vnmi ept vpid fsgsbase smep xsaveopt
[root@UA-HA ~]#

  • vmx is for Intel processors
  • svm is for AMD processors

 

2. Install the KVM packages and tools. (Skip this step if the packages are already installed.)

[root@UA-HA ~]#  yum install qemu-kvm libvirt libvirt-python libguestfs-tools bridge-utils virt-install
Loaded plugins: langpacks, product-id, search-disabled-repos, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
Package 10:qemu-kvm-1.5.3-105.el7.x86_64 already installed and latest version
Package libvirt-python-1.2.17-2.el7.x86_64 already installed and latest version
Package virt-install-1.2.1-8.el7.noarch already installed and latest version
Resolving Dependencies
--> Running transaction check
---> Package libguestfs-tools.noarch 1:1.28.1-1.55.el7 will be installed
--> Processing Dependency: libguestfs-tools-c = 1:1.28.1-1.55.el7 for package: 1:libguestfs-tools-1.28.1-1.55.el7.noarch
--> Processing Dependency: perl(Sys::Virt) for package: 1:libguestfs-tools-1.28.1-1.55.el7.noarch
--> Processing Dependency: perl(Locale::TextDomain) for package: 1:libguestfs-tools-1.28.1-1.55.el7.noarch
--> Processing Dependency: perl(Sys::Guestfs) for package: 1:libguestfs-tools-1.28.1-1.55.el7.noarch
---> Package libvirt.x86_64 0:1.2.17-13.el7 will be installed
--> Running transaction check
---> Package libguestfs-tools-c.x86_64 1:1.28.1-1.55.el7 will be installed
--> Processing Dependency: /usr/bin/hexedit for package: 1:libguestfs-tools-c-1.28.1-1.55.el7.x86_64
---> Package perl-Sys-Guestfs.x86_64 1:1.28.1-1.55.el7 will be installed
---> Package perl-Sys-Virt.x86_64 0:1.2.17-2.el7 will be installed
---> Package perl-libintl.x86_64 0:1.20-12.el7 will be installed
--> Running transaction check
---> Package hexedit.x86_64 0:1.2.13-5.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

=====================================================================================================================================================================
 Package                                     Arch                            Version                                      Repository                            Size
=====================================================================================================================================================================
Installing:
 libguestfs-tools                            noarch                          1:1.28.1-1.55.el7                            repo-update                          108 k
 libvirt                                     x86_64                          1.2.17-13.el7                                repo-update                          116 k
Installing for dependencies:
 hexedit                                     x86_64                          1.2.13-5.el7                                 repo-update                           39 k
 libguestfs-tools-c                          x86_64                          1:1.28.1-1.55.el7                            repo-update                          2.2 M
 perl-Sys-Guestfs                            x86_64                          1:1.28.1-1.55.el7                            repo-update                          370 k
 perl-Sys-Virt                               x86_64                          1.2.17-2.el7                                 repo-update                          263 k
 perl-libintl                                x86_64                          1.20-12.el7                                  repo-update                          875 k

Transaction Summary
=====================================================================================================================================================================
Install  2 Packages (+5 Dependent packages)

Total download size: 3.9 M
Installed size: 18 M
Is this ok [y/d/N]: y
Downloading packages:
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                                                                 26 MB/s | 3.9 MB  00:00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : perl-Sys-Virt-1.2.17-2.el7.x86_64                                                                                                                 1/7
  Installing : perl-libintl-1.20-12.el7.x86_64                                                                                                                   2/7
  Installing : hexedit-1.2.13-5.el7.x86_64                                                                                                                       3/7
  Installing : 1:libguestfs-tools-c-1.28.1-1.55.el7.x86_64                                                                                                       4/7
  Installing : 1:perl-Sys-Guestfs-1.28.1-1.55.el7.x86_64                                                                                                         5/7
  Installing : 1:libguestfs-tools-1.28.1-1.55.el7.noarch                                                                                                         6/7
  Installing : libvirt-1.2.17-13.el7.x86_64                                                                                                                      7/7
  Verifying  : 1:perl-Sys-Guestfs-1.28.1-1.55.el7.x86_64                                                                                                         1/7
  Verifying  : 1:libguestfs-tools-c-1.28.1-1.55.el7.x86_64                                                                                                       2/7
  Verifying  : hexedit-1.2.13-5.el7.x86_64                                                                                                                       3/7
  Verifying  : 1:libguestfs-tools-1.28.1-1.55.el7.noarch                                                                                                         4/7
  Verifying  : perl-libintl-1.20-12.el7.x86_64                                                                                                                   5/7
  Verifying  : perl-Sys-Virt-1.2.17-2.el7.x86_64                                                                                                                 6/7
  Verifying  : libvirt-1.2.17-13.el7.x86_64                                                                                                                      7/7

Installed:
  libguestfs-tools.noarch 1:1.28.1-1.55.el7                                              libvirt.x86_64 0:1.2.17-13.el7

Dependency Installed:
  hexedit.x86_64 0:1.2.13-5.el7       libguestfs-tools-c.x86_64 1:1.28.1-1.55.el7   perl-Sys-Guestfs.x86_64 1:1.28.1-1.55.el7   perl-Sys-Virt.x86_64 0:1.2.17-2.el7
  perl-libintl.x86_64 0:1.20-12.el7

Complete!
[root@UA-HA ~]#

 

3. Enable and start the libvirtd service.

[root@UA-HA ~]#  systemctl enable libvirtd && systemctl start libvirtd
[root@UA-HA ~]# systemctl status libvirtd
● libvirtd.service - Virtualization daemon
   Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; vendor preset: enabled)
   Active: active (running) since Sun 2015-12-13 13:12:43 EST; 13h ago
     Docs: man:libvirtd(8)
           http://libvirt.org
 Main PID: 1565 (libvirtd)
   CGroup: /system.slice/libvirtd.service
           ├─1565 /usr/sbin/libvirtd
           ├─5307 /sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper
           └─5309 /sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper

Dec 13 13:12:45 UA-HA dnsmasq[5307]: compile time options: IPv6 GNU-getopt DBus no-i18n IDN DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth
Dec 13 13:12:45 UA-HA dnsmasq-dhcp[5307]: DHCP, IP range 192.168.122.2 -- 192.168.122.254, lease time 1h
Dec 13 13:12:45 UA-HA dnsmasq[5307]: reading /etc/resolv.conf
Dec 13 13:12:45 UA-HA dnsmasq[5307]: using nameserver 127.0.0.1#53
Dec 13 13:12:45 UA-HA dnsmasq[5307]: read /etc/hosts - 2 addresses
Dec 13 13:12:45 UA-HA dnsmasq[5307]: read /var/lib/libvirt/dnsmasq/default.addnhosts - 0 addresses
Dec 13 13:12:45 UA-HA dnsmasq-dhcp[5307]: read /var/lib/libvirt/dnsmasq/default.hostsfile
Dec 14 02:46:43 UA-HA libvirtd[1565]: libvirt version: 1.2.17, package: 13.el7 (Red Hat, Inc. <http://bugzilla.redhat.com/bugzilla>, 2015-10-08-09:11:06...edhat.com)
Dec 14 02:46:43 UA-HA libvirtd[1565]: End of file while reading data: Input/output error
Dec 14 03:00:13 UA-HA systemd[1]: Started Virtualization daemon.
Hint: Some lines were ellipsized, use -l to show in full.
[root@UA-HA ~]#

 

4. Check the KVM kernel modules.

[root@UA-HA ~]# lsmod |grep -i kvm
kvm_intel             162153  0
kvm                   525259  1 kvm_intel
[root@UA-HA ~]#
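
To confirm that libvirt can actually talk to the hypervisor, the following standard virsh commands can be used (a quick sanity check, not required for the installation itself):

[root@UA-HA ~]# virsh version     # reports the libvirt, QEMU and hypervisor versions
[root@UA-HA ~]# virsh nodeinfo    # CPU and memory summary of the hypervisor node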

 

5. virt-manager provides the GUI to manage KVM. You can install virt-manager either on the hypervisor host or on a remote system. (This is similar to the vSphere Client in VMware.)

[root@UA-HA ~]# yum install virt-manager
Loaded plugins: langpacks, product-id, search-disabled-repos, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
Resolving Dependencies
--> Running transaction check
---> Package virt-manager.noarch 0:1.2.1-8.el7 will be installed
--> Processing Dependency: libvirt-glib >= 0.0.9 for package: virt-manager-1.2.1-8.el7.noarch
--> Processing Dependency: dbus-x11 for package: virt-manager-1.2.1-8.el7.noarch
--> Processing Dependency: dconf for package: virt-manager-1.2.1-8.el7.noarch
--> Processing Dependency: gnome-icon-theme for package: virt-manager-1.2.1-8.el7.noarch
--> Processing Dependency: gtk-vnc2 for package: virt-manager-1.2.1-8.el7.noarch
--> Processing Dependency: gtk3 for package: virt-manager-1.2.1-8.el7.noarch
--> Processing Dependency: pygobject3 for package: virt-manager-1.2.1-8.el7.noarch
--> Processing Dependency: spice-gtk3 for package: virt-manager-1.2.1-8.el7.noarch
--> Processing Dependency: vte3 for package: virt-manager-1.2.1-8.el7.noarch
--> Running transaction check
---> Package dbus-x11.x86_64 1:1.6.12-13.el7 will be installed
---> Package dconf.x86_64 0:0.22.0-2.el7 will be installed
---> Package gnome-icon-theme.noarch 0:3.12.0-1.el7 will be installed
---> Package gtk-vnc2.x86_64 0:0.5.2-7.el7 will be installed
--> Processing Dependency: gvnc = 0.5.2-7.el7 for package: gtk-vnc2-0.5.2-7.el7.x86_64
--> Processing Dependency: libcairo-gobject.so.2()(64bit) for package: gtk-vnc2-0.5.2-7.el7.x86_64
--> Processing Dependency: libgvnc-1.0.so.0()(64bit) for package: gtk-vnc2-0.5.2-7.el7.x86_64
---> Package gtk3.x86_64 0:3.14.13-16.el7 will be installed
--> Processing Dependency: adwaita-icon-theme for package: gtk3-3.14.13-16.el7.x86_64
--> Processing Dependency: libatk-bridge-2.0.so.0()(64bit) for package: gtk3-3.14.13-16.el7.x86_64
--> Processing Dependency: libcolord.so.2()(64bit) for package: gtk3-3.14.13-16.el7.x86_64
--> Processing Dependency: libjson-glib-1.0.so.0()(64bit) for package: gtk3-3.14.13-16.el7.x86_64
--> Processing Dependency: librest-0.7.so.0()(64bit) for package: gtk3-3.14.13-16.el7.x86_64
---> Package libvirt-glib.x86_64 0:0.1.9-1.el7 will be installed
---> Package pygobject3.x86_64 0:3.14.0-3.el7 will be installed
--> Processing Dependency: pycairo(x86-64) for package: pygobject3-3.14.0-3.el7.x86_64
---> Package spice-gtk3.x86_64 0:0.26-5.el7 will be installed
--> Processing Dependency: spice-glib(x86-64) = 0.26-5.el7 for package: spice-gtk3-0.26-5.el7.x86_64
--> Processing Dependency: libspice-client-glib-2.0.so.8(SPICEGTK_1)(64bit) for package: spice-gtk3-0.26-5.el7.x86_64
--> Processing Dependency: libcacard.so.0()(64bit) for package: spice-gtk3-0.26-5.el7.x86_64
--> Processing Dependency: libpulse-mainloop-glib.so.0()(64bit) for package: spice-gtk3-0.26-5.el7.x86_64
--> Processing Dependency: libspice-client-glib-2.0.so.8()(64bit) for package: spice-gtk3-0.26-5.el7.x86_64
---> Package vte3.x86_64 0:0.36.4-1.el7 will be installed
--> Processing Dependency: vte-profile for package: vte3-0.36.4-1.el7.x86_64
--> Running transaction check
---> Package adwaita-icon-theme.noarch 0:3.14.1-1.el7 will be installed
--> Processing Dependency: adwaita-cursor-theme = 3.14.1-1.el7 for package: adwaita-icon-theme-3.14.1-1.el7.noarch
---> Package at-spi2-atk.x86_64 0:2.8.1-4.el7 will be installed
--> Processing Dependency: at-spi2-core >= 2.7.5 for package: at-spi2-atk-2.8.1-4.el7.x86_64
--> Processing Dependency: libatspi.so.0()(64bit) for package: at-spi2-atk-2.8.1-4.el7.x86_64
---> Package cairo-gobject.x86_64 0:1.14.2-1.el7 will be installed
---> Package colord-libs.x86_64 0:1.2.7-2.el7 will be installed
--> Processing Dependency: libgusb.so.2()(64bit) for package: colord-libs-1.2.7-2.el7.x86_64
---> Package gvnc.x86_64 0:0.5.2-7.el7 will be installed
---> Package json-glib.x86_64 0:1.0.2-1.el7 will be installed
---> Package libcacard.x86_64 10:1.5.3-105.el7 will be installed
---> Package pulseaudio-libs-glib2.x86_64 0:6.0-7.el7 will be installed
---> Package pycairo.x86_64 0:1.8.10-8.el7 will be installed
---> Package rest.x86_64 0:0.7.92-3.el7 will be installed
---> Package spice-glib.x86_64 0:0.26-5.el7 will be installed
---> Package vte-profile.x86_64 0:0.38.3-2.el7 will be installed
--> Running transaction check
---> Package adwaita-cursor-theme.noarch 0:3.14.1-1.el7 will be installed
---> Package at-spi2-core.x86_64 0:2.8.0-6.el7 will be installed
--> Processing Dependency: libXevie.so.1()(64bit) for package: at-spi2-core-2.8.0-6.el7.x86_64
---> Package libgusb.x86_64 0:0.1.6-3.el7 will be installed
--> Running transaction check
---> Package libXevie.x86_64 0:1.0.3-7.1.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

=====================================================================================================================================================================
 Package                                        Arch                            Version                                   Repository                            Size
=====================================================================================================================================================================
Installing:
 virt-manager                                   noarch                          1.2.1-8.el7                               repo-update                          628 k
Installing for dependencies:
 adwaita-cursor-theme                           noarch                          3.14.1-1.el7                              repo-update                          128 k
 adwaita-icon-theme                             noarch                          3.14.1-1.el7                              repo-update                           11 M
 at-spi2-atk                                    x86_64                          2.8.1-4.el7                               repo-update                           73 k
 at-spi2-core                                   x86_64                          2.8.0-6.el7                               repo-update                          151 k
 cairo-gobject                                  x86_64                          1.14.2-1.el7                              repo-update                           25 k
 colord-libs                                    x86_64                          1.2.7-2.el7                               repo-update                          174 k
 dbus-x11                                       x86_64                          1:1.6.12-13.el7                           repo-update                           46 k
 dconf                                          x86_64                          0.22.0-2.el7                              repo-update                          157 k
 gnome-icon-theme                               noarch                          3.12.0-1.el7                              repo-update                          9.7 M
 gtk-vnc2                                       x86_64                          0.5.2-7.el7                               repo-update                           38 k
 gtk3                                           x86_64                          3.14.13-16.el7                            repo-update                          3.8 M
 gvnc                                           x86_64                          0.5.2-7.el7                               repo-update                           89 k
 json-glib                                      x86_64                          1.0.2-1.el7                               repo-update                          123 k
 libXevie                                       x86_64                          1.0.3-7.1.el7                             repo-update                           18 k
 libcacard                                      x86_64                          10:1.5.3-105.el7                          repo-update                          227 k
 libgusb                                        x86_64                          0.1.6-3.el7                               repo-update                           33 k
 libvirt-glib                                   x86_64                          0.1.9-1.el7                               repo-update                           84 k
 pulseaudio-libs-glib2                          x86_64                          6.0-7.el7                                 repo-update                           27 k
 pycairo                                        x86_64                          1.8.10-8.el7                              repo-update                          157 k
 pygobject3                                     x86_64                          3.14.0-3.el7                              repo-update                           16 k
 rest                                           x86_64                          0.7.92-3.el7                              repo-update                           62 k
 spice-glib                                     x86_64                          0.26-5.el7                                repo-update                          350 k
 spice-gtk3                                     x86_64                          0.26-5.el7                                repo-update                           51 k
 vte-profile                                    x86_64                          0.38.3-2.el7                              repo-update                          6.0 k
 vte3                                           x86_64                          0.36.4-1.el7                              repo-update                          337 k

Transaction Summary
=====================================================================================================================================================================
Install  1 Package (+25 Dependent packages)

Total download size: 28 M
Installed size: 49 M
Is this ok [y/d/N]: y
Downloading packages:
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                                                                 47 MB/s |  28 MB  00:00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : cairo-gobject-1.14.2-1.el7.x86_64                                                                                                                1/26
  Installing : 10:libcacard-1.5.3-105.el7.x86_64                                                                                                                2/26
  Installing : pulseaudio-libs-glib2-6.0-7.el7.x86_64                                                                                                           3/26
  Installing : spice-glib-0.26-5.el7.x86_64                                                                                                                     4/26
  Installing : rest-0.7.92-3.el7.x86_64                                                                                                                         5/26
  Installing : vte-profile-0.38.3-2.el7.x86_64                                                                                                                  6/26
  Installing : dconf-0.22.0-2.el7.x86_64                                                                                                                        7/26
  Installing : pycairo-1.8.10-8.el7.x86_64                                                                                                                      8/26
  Installing : pygobject3-3.14.0-3.el7.x86_64                                                                                                                   9/26
  Installing : libgusb-0.1.6-3.el7.x86_64                                                                                                                      10/26
  Installing : colord-libs-1.2.7-2.el7.x86_64                                                                                                                  11/26
  Installing : json-glib-1.0.2-1.el7.x86_64                                                                                                                    12/26
  Installing : libXevie-1.0.3-7.1.el7.x86_64                                                                                                                   13/26
  Installing : at-spi2-core-2.8.0-6.el7.x86_64                                                                                                                 14/26
  Installing : at-spi2-atk-2.8.1-4.el7.x86_64                                                                                                                  15/26
  Installing : 1:dbus-x11-1.6.12-13.el7.x86_64                                                                                                                 16/26
  Installing : adwaita-cursor-theme-3.14.1-1.el7.noarch                                                                                                        17/26
  Installing : adwaita-icon-theme-3.14.1-1.el7.noarch                                                                                                          18/26
  Installing : gtk3-3.14.13-16.el7.x86_64                                                                                                                      19/26
  Installing : spice-gtk3-0.26-5.el7.x86_64                                                                                                                    20/26
  Installing : vte3-0.36.4-1.el7.x86_64                                                                                                                        21/26
  Installing : gvnc-0.5.2-7.el7.x86_64                                                                                                                         22/26
  Installing : gtk-vnc2-0.5.2-7.el7.x86_64                                                                                                                     23/26
  Installing : libvirt-glib-0.1.9-1.el7.x86_64                                                                                                                 24/26
  Installing : gnome-icon-theme-3.12.0-1.el7.noarch                                                                                                            25/26
  Installing : virt-manager-1.2.1-8.el7.noarch                                                                                                                 26/26
  Verifying  : spice-glib-0.26-5.el7.x86_64                                                                                                                     1/26
  Verifying  : gnome-icon-theme-3.12.0-1.el7.noarch                                                                                                             2/26
  Verifying  : spice-gtk3-0.26-5.el7.x86_64                                                                                                                     3/26
  Verifying  : colord-libs-1.2.7-2.el7.x86_64                                                                                                                   4/26
  Verifying  : libvirt-glib-0.1.9-1.el7.x86_64                                                                                                                  5/26
  Verifying  : pulseaudio-libs-glib2-6.0-7.el7.x86_64                                                                                                           6/26
  Verifying  : adwaita-icon-theme-3.14.1-1.el7.noarch                                                                                                           7/26
  Verifying  : gvnc-0.5.2-7.el7.x86_64                                                                                                                          8/26
  Verifying  : adwaita-cursor-theme-3.14.1-1.el7.noarch                                                                                                         9/26
  Verifying  : 1:dbus-x11-1.6.12-13.el7.x86_64                                                                                                                 10/26
  Verifying  : gtk3-3.14.13-16.el7.x86_64                                                                                                                      11/26
  Verifying  : libXevie-1.0.3-7.1.el7.x86_64                                                                                                                   12/26
  Verifying  : json-glib-1.0.2-1.el7.x86_64                                                                                                                    13/26
  Verifying  : virt-manager-1.2.1-8.el7.noarch                                                                                                                 14/26
  Verifying  : libgusb-0.1.6-3.el7.x86_64                                                                                                                      15/26
  Verifying  : at-spi2-core-2.8.0-6.el7.x86_64                                                                                                                 16/26
  Verifying  : pygobject3-3.14.0-3.el7.x86_64                                                                                                                  17/26
  Verifying  : pycairo-1.8.10-8.el7.x86_64                                                                                                                     18/26
  Verifying  : dconf-0.22.0-2.el7.x86_64                                                                                                                       19/26
  Verifying  : vte-profile-0.38.3-2.el7.x86_64                                                                                                                 20/26
  Verifying  : gtk-vnc2-0.5.2-7.el7.x86_64                                                                                                                     21/26
  Verifying  : at-spi2-atk-2.8.1-4.el7.x86_64                                                                                                                  22/26
  Verifying  : cairo-gobject-1.14.2-1.el7.x86_64                                                                                                               23/26
  Verifying  : rest-0.7.92-3.el7.x86_64                                                                                                                        24/26
  Verifying  : 10:libcacard-1.5.3-105.el7.x86_64                                                                                                               25/26
  Verifying  : vte3-0.36.4-1.el7.x86_64                                                                                                                        26/26

Installed:
  virt-manager.noarch 0:1.2.1-8.el7

Dependency Installed:
  adwaita-cursor-theme.noarch 0:3.14.1-1.el7    adwaita-icon-theme.noarch 0:3.14.1-1.el7    at-spi2-atk.x86_64 0:2.8.1-4.el7     at-spi2-core.x86_64 0:2.8.0-6.el7
  cairo-gobject.x86_64 0:1.14.2-1.el7           colord-libs.x86_64 0:1.2.7-2.el7            dbus-x11.x86_64 1:1.6.12-13.el7      dconf.x86_64 0:0.22.0-2.el7
  gnome-icon-theme.noarch 0:3.12.0-1.el7        gtk-vnc2.x86_64 0:0.5.2-7.el7               gtk3.x86_64 0:3.14.13-16.el7         gvnc.x86_64 0:0.5.2-7.el7
  json-glib.x86_64 0:1.0.2-1.el7                libXevie.x86_64 0:1.0.3-7.1.el7             libcacard.x86_64 10:1.5.3-105.el7    libgusb.x86_64 0:0.1.6-3.el7
  libvirt-glib.x86_64 0:0.1.9-1.el7             pulseaudio-libs-glib2.x86_64 0:6.0-7.el7    pycairo.x86_64 0:1.8.10-8.el7        pygobject3.x86_64 0:3.14.0-3.el7
  rest.x86_64 0:0.7.92-3.el7                    spice-glib.x86_64 0:0.26-5.el7              spice-gtk3.x86_64 0:0.26-5.el7       vte-profile.x86_64 0:0.38.3-2.el7
  vte3.x86_64 0:0.36.4-1.el7

Complete!
[root@UA-HA ~]# 


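Once the package is installed, virt-manager can be launched from any X11-capable session — for example, an SSH session with X11 forwarding, as used later in this series:

[root@UA-HA ~]# virt-manager &    # opens the Virtual Machine Manager GUI on the forwarded X display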

 

6. Optionally, you can install virt-top to check the resource utilization of the guests from the host node.

[root@UA-HA ~]# yum install virt-top.x86_64
Loaded plugins: langpacks, product-id, search-disabled-repos, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
Resolving Dependencies
--> Running transaction check
---> Package virt-top.x86_64 0:1.0.8-8.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

=====================================================================================================================================================================
 Package                               Arch                                Version                                    Repository                                Size
=====================================================================================================================================================================
Installing:
 virt-top                              x86_64                              1.0.8-8.el7                                repo-update                              400 k

Transaction Summary
=====================================================================================================================================================================
Install  1 Package

Total download size: 400 k
Installed size: 1.4 M
Is this ok [y/d/N]: y
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : virt-top-1.0.8-8.el7.x86_64                                                                                                                       1/1
  Verifying  : virt-top-1.0.8-8.el7.x86_64                                                                                                                       1/1

Installed:
  virt-top.x86_64 0:1.0.8-8.el7

Complete!
[root@UA-HA ~]#

 

7. Install virt-viewer to view the guests' VNC consoles.

[root@UA-HA ~]# yum install virt-viewer
Loaded plugins: langpacks, product-id, search-disabled-repos, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
Resolving Dependencies
--> Running transaction check
---> Package virt-viewer.x86_64 0:2.0-6.el7 will be installed
--> Processing Dependency: libgovirt.so.2(GOVIRT_0.2.0)(64bit) for package: virt-viewer-2.0-6.el7.x86_64
--> Processing Dependency: libgovirt.so.2(GOVIRT_0.2.1)(64bit) for package: virt-viewer-2.0-6.el7.x86_64
--> Processing Dependency: libgovirt.so.2(GOVIRT_0.3.1)(64bit) for package: virt-viewer-2.0-6.el7.x86_64
--> Processing Dependency: libgovirt.so.2()(64bit) for package: virt-viewer-2.0-6.el7.x86_64
--> Running transaction check
---> Package libgovirt.x86_64 0:0.3.3-1.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

===========================================================================================================================
 Package                       Arch                     Version                        Repository                     Size
===========================================================================================================================
Installing:
 virt-viewer                   x86_64                   2.0-6.el7                      repo-update                   339 k
Installing for dependencies:
 libgovirt                     x86_64                   0.3.3-1.el7                    repo-update                    63 k

Transaction Summary
===========================================================================================================================
Install  1 Package (+1 Dependent package)

Total download size: 402 k
Installed size: 1.4 M
Is this ok [y/d/N]: y
Downloading packages:
---------------------------------------------------------------------------------------------------------------------------
Total                                                                                      5.4 MB/s | 402 kB  00:00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : libgovirt-0.3.3-1.el7.x86_64                                                                            1/2
  Installing : virt-viewer-2.0-6.el7.x86_64                                                                            2/2
  Verifying  : virt-viewer-2.0-6.el7.x86_64                                                                            1/2
  Verifying  : libgovirt-0.3.3-1.el7.x86_64                                                                            2/2

Installed:
  virt-viewer.x86_64 0:2.0-6.el7

Dependency Installed:
  libgovirt.x86_64 0:0.3.3-1.el7

Complete!
[root@UA-HA ~]#
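
Once a guest exists (for example, the UAKVM2 guest created later in this series), virt-viewer can also connect straight to a guest's console by name — a usage sketch:

[root@UA-HA ~]# virt-viewer UAKVM2 &    # opens the graphical console of the named running guest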

We have successfully installed the KVM packages and the supporting tools.

Hope this article is informative to you . Share it ! Comment it !! Be Sociable !!!

The post Redhat Enterprise Linux – KVM Installation Part 2 appeared first on UnixArena.

RHEL 7.2 – Configuring KVM hosts – Part 3

$
0
0

KVM hosts need to be prepared to store the guest images and to provide network access to the guest machines. In the last article, we covered the KVM package installation and the VMM (Virtual Machine Manager) installation. Once you have installed the packages, you need to create a filesystem to store the virtual machine images (/var/lib/libvirt/images is the default storage path). If you are planning to move VMs from one host to another, you need a shared filesystem (NFS) or shared storage (SAN). To access the guests from the external network, you must configure a bridge on the host. This article demonstrates the bridge creation and the creation of the storage pool that stores the virtual machines and ISO images.

  • Host – The hypervisor or physical server where all VMs are installed.
  • VMs (Virtual Machines) or Guests – Virtual servers that are installed on top of a physical server.

 

Host Operating System (Hypervisor) – RHEL 7.2

 

Configure the New Bridge on host (Hypervisor):

Bridge configuration is required to provide the guests with direct access to the external network.

1. Login to the host.

2. View the current network configuration.

[root@UA-HA ~]# ifconfig -a
eno16777736: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.203.134  netmask 255.255.255.0  broadcast 192.168.203.255
        inet6 fe80::20c:29ff:fe2d:3fce  prefixlen 64  scopeid 0x20
        ether 00:0c:29:2d:3f:ce  txqueuelen 1000  (Ethernet)
        RX packets 13147  bytes 1923365 (1.8 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 7545  bytes 784722 (766.3 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10
        loop  txqueuelen 0  (Local Loopback)
        RX packets 1616  bytes 385042 (376.0 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1616  bytes 385042 (376.0 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

virbr0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
        ether 52:54:00:16:bc:24  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

virbr0-nic: flags=4098<BROADCAST,MULTICAST>  mtu 1500
        ether 52:54:00:16:bc:24  txqueuelen 500  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@UA-HA ~]#

 

3. Re-configure the primary interface to enable bridging. Navigate to the network configuration directory and update the "ifcfg-xxxxxx" file as shown below.

[root@UA-HA ~]# cd /etc/sysconfig/network-scripts/
[root@UA-HA network-scripts]# vi ifcfg-eno16777736
[root@UA-HA network-scripts]# cat ifcfg-eno16777736
HWADDR="00:0C:29:2D:3F:CE"
TYPE="Ethernet"
ONBOOT="yes"
BRIDGE=br0
[root@UA-HA network-scripts]#

 

4. Create the bridge configuration file as shown below.

[root@UA-HA ~]# cd /etc/sysconfig/network-scripts/
[root@UA-HA network-scripts]# cat ifcfg-br0
TYPE="Bridge"
DEVICE=br0
BOOTPROTO="dhcp"
ONBOOT="yes"
DELAY=0
STP=0
[root@UA-HA network-scripts]#
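
The example above uses DHCP on the bridge. If you prefer a static address, a minimal sketch of the same file could look like this (the IP below is the address this lab host received via DHCP; the gateway and DNS values are placeholders that you must replace with your own):

TYPE="Bridge"
DEVICE=br0
BOOTPROTO=none
ONBOOT="yes"
DELAY=0
STP=0
# Example addressing - adjust for your network
IPADDR=192.168.203.134
PREFIX=24
GATEWAY=192.168.203.2
DNS1=192.168.203.2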

 

5. Update the "sysctl.conf" file to enable IP forwarding.

[root@UA-HA ~]# grep net /etc/sysctl.conf
net.ipv4.ip_forward=1
[root@UA-HA ~]#

 

6. Run the sysctl command to activate IP forwarding immediately.

[root@UA-HA ~]# sysctl -p
net.ipv4.ip_forward = 1
[root@UA-HA ~]#

 

7. Restart the network services to activate the bridge configuration.

[root@UA-HA ~]# systemctl restart network
[root@UA-HA ~]# systemctl status network
● network.service - LSB: Bring up/down networking
   Loaded: loaded (/etc/rc.d/init.d/network)
   Active: active (exited) since Mon 2015-12-14 06:26:08 EST; 11s ago
     Docs: man:systemd-sysv-generator(8)
  Process: 38831 ExecStop=/etc/rc.d/init.d/network stop (code=exited, status=0/SUCCESS)
  Process: 39021 ExecStart=/etc/rc.d/init.d/network start (code=exited, status=0/SUCCESS)

Dec 14 06:26:07 UA-HA systemd[1]: Starting LSB: Bring up/down networking...
Dec 14 06:26:08 UA-HA network[39021]: Bringing up loopback interface:  Could not load file '/etc/sysconfig/network-scripts/ifcfg-lo'
Dec 14 06:26:08 UA-HA network[39021]: Could not load file '/etc/sysconfig/network-scripts/ifcfg-lo'
Dec 14 06:26:08 UA-HA network[39021]: Could not load file '/etc/sysconfig/network-scripts/ifcfg-lo'
Dec 14 06:26:08 UA-HA network[39021]: Could not load file '/etc/sysconfig/network-scripts/ifcfg-lo'
Dec 14 06:26:08 UA-HA network[39021]: [  OK  ]
Dec 14 06:26:08 UA-HA network[39021]: Bringing up interface eno16777736:  [  OK  ]
Dec 14 06:26:08 UA-HA network[39021]: Bringing up interface br0:  [  OK  ]
Dec 14 06:26:08 UA-HA systemd[1]: Started LSB: Bring up/down networking.
[root@UA-HA ~]#

 

8. Verify the network configuration again.

[root@UA-HA ~]# ifconfig -a
br0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.203.134  netmask 255.255.255.0  broadcast 192.168.203.255
        inet6 fe80::20c:29ff:fe2d:3fce  prefixlen 64  scopeid 0x20
        ether 00:0c:29:2d:3f:ce  txqueuelen 0  (Ethernet)
        RX packets 104  bytes 8568 (8.3 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 123  bytes 12778 (12.4 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eno16777736: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        ether 00:0c:29:2d:3f:ce  txqueuelen 1000  (Ethernet)
        RX packets 13902  bytes 1985960 (1.8 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 8161  bytes 841989 (822.2 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10
        loop  txqueuelen 0  (Local Loopback)
        RX packets 1828  bytes 435567 (425.3 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1828  bytes 435567 (425.3 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

virbr0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
        ether 52:54:00:16:bc:24  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

virbr0-nic: flags=4098<BROADCAST,MULTICAST>  mtu 1500
        ether 52:54:00:16:bc:24  txqueuelen 500  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@UA-HA ~]#

Looks good.

 

9. Verify the bridge information.

[root@UA-HA ~]# brctl show
bridge name     bridge id               STP enabled     interfaces
br0             8000.000c292d3fce       no              eno16777736
virbr0          8000.52540016bc24       yes             virbr0-nic
[root@UA-HA ~]#

We have successfully created the bridge to provide network access to the guests.

 

 

Configure the Storage Pool:

 

There is no requirement to keep the guests on a shared filesystem. However, if you keep the guests on a shared filesystem, you can easily migrate the VMs from one host to another. The latest KVM versions support live VM migration (similar to vMotion in VMware). The default storage pool path is /var/lib/libvirt/images.

In this tutorial, I am going to use NFS as the shared filesystem.

 

1. My NFS server IP is 192.168.203.1. Mount the new NFS share on the mount point /var/lib/libvirt/images.

[root@UA-HA ~]# df -h /var/lib/libvirt/images
Filesystem            Size  Used Avail Use% Mounted on
192.168.203.1:/D/NFS  149G  121G   29G  82% /var/lib/libvirt/images
[root@UA-HA ~]#
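
To make the mount persistent across reboots, an /etc/fstab entry along these lines is typically used (the _netdev option delays the mount until the network is up — adjust the options to suit your environment):

192.168.203.1:/D/NFS   /var/lib/libvirt/images   nfs   defaults,_netdev   0 0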

 

2. List the storage pool.

[root@UA-HA ~]# virsh pool-list
 Name                 State      Autostart
-------------------------------------------

[root@UA-HA ~]#

 

3. Create the new storage pool with the name "default".

[root@UA-HA ~]# virsh pool-build default
Pool default built

[root@UA-HA ~]#
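
Note: "virsh pool-build" assumes the pool named "default" is already defined. If it is not present on your system, a minimal sketch to define it as a directory pool first would be:

[root@UA-HA ~]# virsh pool-define-as default dir --target /var/lib/libvirt/images
[root@UA-HA ~]# virsh pool-build default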

 

4. Start the storage pool.

[root@UA-HA ~]# virsh pool-start default
Pool default started

[root@UA-HA ~]# 
[root@UA-HA ~]# virsh pool-list
 Name                 State      Autostart
-------------------------------------------
 default              active     yes

[root@UA-HA ~]#
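
The output above already shows Autostart set for this pool. If it were not, the pool can be marked to start automatically with libvirtd:

[root@UA-HA ~]# virsh pool-autostart default    # start this pool automatically on libvirtd startup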

 

5. Check the storage pool info.

[root@UA-HA ~]# virsh pool-info default
Name:           default
UUID:           3599dd8a-edef-4c00-9ff5-6d880f1ecb8b
State:          running
Persistent:     yes
Autostart:      yes
Capacity:       148.46 GiB
Allocation:     120.35 GiB
Available:      28.11 GiB

[root@UA-HA ~]# 

The storage pool information matches the NFS mount. (It actually reports the available disk space under "/var/lib/libvirt/images".)
[root@UA-HA ~]# df -h /var/lib/libvirt/images
Filesystem            Size  Used Avail Use% Mounted on
192.168.203.1:/D/NFS  149G  121G   29G  82% /var/lib/libvirt/images
[root@UA-HA ~]#

 

We have prepared the host for creating new virtual machines. In the next article, we will see how to create a new guest using the CLI.

 

Share it ! Comment it !! Be Sociable !!!

The post RHEL 7.2 – Configuring KVM hosts – Part 3 appeared first on UnixArena.

Launch the first KVM instance using CLI – Part 4

$
0
0

Provisioning new VMs (guests) using the "virt-install" binary is straightforward. virt-install can be run in interactive or non-interactive mode. The command has many options, but they are easy to remember since they are very meaningful. This article demonstrates VM creation using the virt-install tool in non-interactive mode. You can also use the GUI (VMM – Virtual Machine Manager) to provision a VM.

Let’s prepare the VM details before kicking off virt-install.

VM Name     UAKVM2
Network     bridge:br0
RAM         1024 MB
CPU         1
DISK        4 GB
CD-ROM      /var/tmp/rhel-server-7.2-x86_64-dvd.iso

 

1. Login to the KVM host as the root user with X11 forwarding enabled. I am using MobaXterm 8.2 Personal Edition to connect to the KVM host over an SSH session.

     ┌────────────────────────────────────────────────────────────────────┐
     │                         • MobaXterm 8.2 •                          │
     │            (SSH client, X-server and networking tools)             │
     │                                                                    │
     │ ➤ SSH session to root@192.168.203.134                              │
     │   • SSH compression : ✔                                            │
     │   • SFTP Browser    : ✔                                            │
     │   • X11-forwarding  : ✔  (remote display is forwarded through SSH) │
     │   • DISPLAY         : ✔  (automatically set on remote server)      │
     │                                                                    │
     │ ➤ For more info, ctrl+click on help or visit our website           │
     └────────────────────────────────────────────────────────────────────┘

Last login: Mon Dec 14 17:04:24 2015 from 192.168.203.1
[root@UA-HA ~]#

 

2. Here is the list of supported OS variants (values for the --os-variant option) on KVM hypervisors. (Most recent list at the time of writing.)

win7                 : Microsoft Windows 7
vista                : Microsoft Windows Vista
winxp64              : Microsoft Windows XP (x86_64)
winxp                : Microsoft Windows XP
win2k                : Microsoft Windows 2000
win2k8               : Microsoft Windows Server 2008
win2k3               : Microsoft Windows Server 2003
openbsd4             : OpenBSD 4.x
freebsd8             : FreeBSD 8.x
freebsd7             : FreeBSD 7.x
freebsd6             : FreeBSD 6.x
solaris9             : Sun Solaris 9
solaris10            : Sun Solaris 10
opensolaris          : Sun OpenSolaris
netware6             : Novell Netware 6
netware5             : Novell Netware 5
netware4             : Novell Netware 4
msdos                : MS-DOS
generic              : Generic
debianwheezy         : Debian Wheezy
debiansqueeze        : Debian Squeeze
debianlenny          : Debian Lenny
debianetch           : Debian Etch
fedora18             : Fedora 18
fedora17             : Fedora 17
fedora16             : Fedora 16
fedora15             : Fedora 15
fedora14             : Fedora 14
fedora13             : Fedora 13
fedora12             : Fedora 12
fedora11             : Fedora 11
fedora10             : Fedora 10
fedora9              : Fedora 9
fedora8              : Fedora 8
fedora7              : Fedora 7
fedora6              : Fedora Core 6
fedora5              : Fedora Core 5
mageia1              : Mageia 1 and later
mes5.1               : Mandriva Enterprise Server 5.1 and later
mes5                 : Mandriva Enterprise Server 5.0
mandriva2010         : Mandriva Linux 2010 and later
mandriva2009         : Mandriva Linux 2009 and earlier
rhel7                : Red Hat Enterprise Linux 7
rhel6                : Red Hat Enterprise Linux 6
rhel5.4              : Red Hat Enterprise Linux 5.4 or later
rhel5                : Red Hat Enterprise Linux 5
rhel4                : Red Hat Enterprise Linux 4
rhel3                : Red Hat Enterprise Linux 3
rhel2.1              : Red Hat Enterprise Linux 2.1
sles11               : Suse Linux Enterprise Server 11
sles10               : Suse Linux Enterprise Server
opensuse12           : openSuse 12
opensuse11           : openSuse 11
ubuntutrusty         : Ubuntu 14.04 LTS (Trusty Tahr)
ubuntusaucy          : Ubuntu 13.10 (Saucy Salamander)
ubunturaring         : Ubuntu 13.04 (Raring Ringtail)
ubuntuquantal        : Ubuntu 12.10 (Quantal Quetzal)
ubuntuprecise        : Ubuntu 12.04 LTS (Precise Pangolin)
ubuntuoneiric        : Ubuntu 11.10 (Oneiric Ocelot)
ubuntunatty          : Ubuntu 11.04 (Natty Narwhal)
ubuntumaverick       : Ubuntu 10.10 (Maverick Meerkat)
ubuntulucid          : Ubuntu 10.04 LTS (Lucid Lynx)
ubuntukarmic         : Ubuntu 9.10 (Karmic Koala)
ubuntujaunty         : Ubuntu 9.04 (Jaunty Jackalope)
ubuntuintrepid       : Ubuntu 8.10 (Intrepid Ibex)
ubuntuhardy          : Ubuntu 8.04 LTS (Hardy Heron)
virtio26             : Generic 2.6.25 or later kernel with virtio
generic26            : Generic 2.6.x kernel
generic24            : Generic 2.4.x kernel
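
On newer virt-install/libvirt releases, the accepted variant names come from the libosinfo database. Assuming the libosinfo tools are installed, a quick way to query the current list (filtering for RHEL as an example):

[root@UA-HA ~]# osinfo-query os | grep -i rhel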

 

Note: In this setup, I have installed the virt-install and virt-manager packages on the KVM hypervisor node itself.

 

3. Create the new KVM virtual machine using the following command.

[root@UA-HA ~]#  virt-install --connect qemu:///system --virt-type kvm --network bridge:br0 --name UAKVM2 --description "First RHEL7 KVM Guest" --os-variant rhel7 --ram=1024 --vcpus=1 --disk size=4 --os-type=linux --graphics vnc,password=123456 --cdrom /var/www/html/rhel-server-7.2-x86_64-dvd.iso

Starting install...
Allocating 'UAKVM2-3.qcow2'                                                                                                                   | 4.0 GB  00:00:00
Creating domain...                                                                                                                            |    0 B  00:00:00

 

Confused by all of the options? They are quite simple; here is what each one does.

  Options       | Values                | Description
====================================================================================
--connect       | qemu:///system        | Connect to the local KVM hypervisor
--virt-type     | kvm                   | Specify the virtualization type (kvm or xen)
--network       | bridge:br0            | Specify the bridge for network connectivity
--name          | UAKVM2                | Virtual machine name
--description   | First RHEL7 KVM Guest | Provide the VM description
--os-variant    | rhel7                 | Provide the OS variant name
--ram           | 1024                  | Set the VM memory to 1 GB (value is in MB)
--vcpus         | 1                     | Set the number of vCPU cores
--disk          | size=4                | Specify the virtual disk size in GB
--os-type       | linux                 | Specify the OS type
--graphics      | vnc,password=123456   | Specify the graphics type and VNC password
--cdrom         | /path_to_iso          | Specify the RHEL 7 ISO image path

 

If you do not have the ISO image locally, you can point to an HTTP installation tree using the --location option, as shown in the example below.
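
For example, a sketch of the same installation driven from an HTTP installation tree instead of a local ISO (the VM name UAKVM4 and the URL are placeholders for illustration):

[root@UA-HA ~]# virt-install --connect qemu:///system --virt-type kvm --network bridge:br0 --name UAKVM4 --description "RHEL7 KVM Guest from HTTP tree" --os-variant rhel7 --ram=1024 --vcpus=1 --disk size=4 --os-type=linux --graphics vnc,password=123456 --location http://192.168.203.134/rhel7/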

 

4. The above command will automatically open a graphical VNC window for the guest.

UAKVM2 Guest VNC window

 

5. Enter the password that you provided in the virt-install command.

UAKVM2 VNC session

 

6. Complete the guest machine installation.

 

7. If the VNC session does not pop up automatically, just execute the "virt-viewer" command. It brings up the list of running machines so that you can connect to a VM's console.

[root@UA-HA ~]# virt-viewer
UAKVM2 console

 

8. On the KVM hypervisor, you can list the running VMs using the "virsh list" command.

[root@UA-HA images]# virsh list
 Id    Name                           State
----------------------------------------------------
 15    UAKVM2                         running

[root@UA-HA images]#
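
To get more details about a specific VM from the CLI (state, vCPUs, memory and so on), virsh also provides a dominfo sub-command; output omitted here:

[root@UA-HA images]# virsh dominfo UAKVM2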

 

9. To see the VMs' resource utilization from the KVM host's point of view, use the virt-top command.

virt-top 00:23:28 - x86_64 2/2CPU 2594MHz 3784MB
2 domains, 1 active, 1 running, 0 sleeping, 0 paused, 1 inactive D:0 O:0 X:0
CPU: 1.3%  Mem: 1024 MB (1024 MB by guests)

   ID S RDRQ WRRQ RXBY TXBY %CPU %MEM    TIME   NAME
   15 R    0    1    0    0  1.3 27.0   1:20.30 UAKVM2
    -                                           (UAKVM1)

 

10. If you want to halt the VM immediately, use the "virsh destroy" command to stop it. (This is a forced power-off; a graceful alternative is shown after the example.)

[root@UA-HA images]# virsh destroy UAKVM2
Domain UAKVM2 destroyed
[root@UA-HA images]#
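
A graceful alternative is "virsh shutdown", which asks the guest OS to shut itself down (the guest must respond to ACPI power events or run the QEMU guest agent):

[root@UA-HA images]# virsh shutdown UAKVM2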

 

11. List the VMs again. Halted VMs are not shown by the "virsh list" command; you must use the "--all" option to see stopped VMs.

[root@UA-HA images]# virsh list
 Id    Name                           State
----------------------------------------------------

[root@UA-HA images]# virsh list  --all
 Id    Name                           State
----------------------------------------------------
 -     UAKVM2                         shut off

[root@UA-HA images]#

 

12. To power on (start) the VM, use the following command.

[root@UA-HA images]# virsh start UAKVM2
Domain UAKVM2 started

[root@UA-HA images]# virsh list
 Id    Name                           State
----------------------------------------------------
 16    UAKVM2                         running

[root@UA-HA images]#
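
Optionally, if you also want this VM to start automatically whenever the KVM host boots (not part of the original walkthrough), virsh provides an autostart flag:

[root@UA-HA images]# virsh autostart UAKVM2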

 

Exploring the VM files:

1. The KVM guest’s configuration file will be created in the following path.

[root@UA-HA libvirt]# cd /etc/libvirt/qemu/
[root@UA-HA qemu]# ls -lrt
total 8
drwx------. 3 root root   40 Dec 14 09:13 networks
-rw-------. 1 root root 3854 Dec 15 00:19 UAKVM2.xml
[root@UA-HA qemu]#

 

2. Use the following command to view the XML configuration file for the guest VM (the XML content itself is not reproduced here).

[root@UA-HA qemu]# cat  UAKVM2.xml
[root@UA-HA qemu]#

 

3. You can also use the "virsh" command to view the VM's configuration.

[root@UA-HA ~]# virsh dumpxml UAKVM2
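
If you need to modify the configuration, it is safer to use "virsh edit" than to edit the XML file directly, because libvirt validates and re-reads the definition when you save:

[root@UA-HA ~]# virsh edit UAKVM2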

 

4. How do you identify the VM's storage path?

[root@UA-HA qemu]# virsh dumpxml UAKVM2 | grep -i "source file"
      <source file='/var/lib/libvirt/images/UAKVM2-3.qcow2'/>
[root@UA-HA qemu]#

 

5. The qcow2 disk image file is created under "/var/lib/libvirt/images" (the default path, unless you specify a different location during VM creation).

[root@UA-HA images]# ls -lrt /var/lib/libvirt/images
total 1121856
-rw-------. 1 root root 4295884800 Dec 15 00:29 UAKVM2-3.qcow2
[root@UA-HA images]#
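
To inspect the image format, virtual size and actual allocation of the qcow2 file, you can use qemu-img (a quick check; output omitted here):

[root@UA-HA images]# qemu-img info /var/lib/libvirt/images/UAKVM2-3.qcow2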

If you want to trigger the installation from another Linux host, use the following command.

[root@UA-HA ~]#  virt-install --connect qemu+ssh://root@192.168.203.134/system --virt-type kvm --network bridge:br0 --name UAKVM2 --description "First RHEL7 KVM Guest" --os-variant rhel7 --ram=1024 --vcpus=1 --disk size=4 --os-type=linux --graphics vnc,password=123456 --cdrom /var/www/html/rhel-server-7.2-x86_64-dvd.iso

Starting install...
Allocating 'UAKVM2-3.qcow2'                                                                                                                   | 4.0 GB  00:00:00
Creating domain...                                                                                                                            |    0 B  00:00:00

 

How to use the virt-install command from another Linux node? (Non-hypervisor node = management node)

1. Log in to the Linux host. (This is not your KVM host; assume that this node will act as the management node.)

2. Install the following packages.

[root@UA-KVM1 ~]# yum install virt-viewer  virt-install virt-manager vnc*

 

3. Enable X11 forwarding in the SSH daemon configuration.

[root@UA-KVM1 ~]# grep X11 /etc/ssh/sshd_config
X11Forwarding yes
[root@UA-KVM1 ~]#
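
If you had to change this setting, restart the SSH daemon for it to take effect (RHEL 7 syntax):

[root@UA-KVM1 ~]# systemctl restart sshd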

 

4. Install "openssh-askpass" so that graphical tools can prompt for the KVM host's password when connecting over SSH.

[root@UA-KVM1 ~]# yum install openssh-askpass
Loaded plugins: product-id, search-disabled-repos, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
Resolving Dependencies
--> Running transaction check
---> Package openssh-askpass.x86_64 0:6.6.1p1-22.el7 will be installed
--> Processing Dependency: libgdk-x11-2.0.so.0()(64bit) for package: openssh-askpass-6.6.1p1-22.el7.x86_64
--> Processing Dependency: libgtk-x11-2.0.so.0()(64bit) for package: openssh-askpass-6.6.1p1-22.el7.x86_64
--> Running transaction check
---> Package gtk2.x86_64 0:2.24.28-8.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

=====================================================================================================================================================================
 Package                                    Arch                              Version                                   Repository                              Size
=====================================================================================================================================================================
Installing:
 openssh-askpass                            x86_64                            6.6.1p1-22.el7                            repo-update                             72 k
Installing for dependencies:
 gtk2                                       x86_64                            2.24.28-8.el7                             repo-update                            3.4 M

Transaction Summary
=====================================================================================================================================================================
Install  1 Package (+1 Dependent package)

Total download size: 3.5 M
Installed size: 13 M
Is this ok [y/d/N]: y
Downloading packages:
(1/2): openssh-askpass-6.6.1p1-22.el7.x86_64.rpm                                                                                              |  72 kB  00:00:00
(2/2): gtk2-2.24.28-8.el7.x86_64.rpm                                                                                                          | 3.4 MB  00:00:00
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                                                                 11 MB/s | 3.5 MB  00:00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : gtk2-2.24.28-8.el7.x86_64                                                                                                                         1/2
  Installing : openssh-askpass-6.6.1p1-22.el7.x86_64                                                                                                             2/2
  Verifying  : openssh-askpass-6.6.1p1-22.el7.x86_64                                                                                                             1/2
  Verifying  : gtk2-2.24.28-8.el7.x86_64                                                                                                                         2/2

Installed:
  openssh-askpass.x86_64 0:6.6.1p1-22.el7

Dependency Installed:
  gtk2.x86_64 0:2.24.28-8.el7

Complete!
[root@UA-KVM1 ~]#

 

5. Configure SSH passwordless authentication for root from the management host to the KVM host.

[root@UA-KVM1 .ssh]# scp id_rsa.pub root@192.168.203.134:/root/.ssh/authorized_keys
root@192.168.203.134's password:
id_rsa.pub                                                                                                                         100%  394     0.4KB/s   00:00
[root@UA-KVM1 .ssh]#
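
Note that the scp above overwrites any existing authorized_keys file on the KVM host. A safer sketch, assuming the key pair does not exist yet, is to generate it and then append it with ssh-copy-id:

[root@UA-KVM1 ~]# ssh-keygen -t rsa
[root@UA-KVM1 ~]# ssh-copy-id root@192.168.203.134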

 

6. You should now be able to log in as root from the management system to the KVM host without a password.

[root@UA-KVM1 ~]# ssh 192.168.203.134
Last login: Tue Dec 15 08:44:35 2015 from 192.168.203.137
[root@UA-HA ~]#

 

7. Create the new VM using virt-install from the management node.

[root@UA-KVM1 ~]# virt-install --connect qemu+ssh://root@192.168.203.134/system --virt-type kvm --network bridge:br0 --name UAKVM3 --description "First RHEL7 KVM Guest" --os-variant rhel7 --ram=1024 --vcpus=1 --disk size=4 --os-type=linux --graphics vnc,password=123456 --cdrom /var/www/html/rhel-server-7.2-x86_64-dvd.iso
root@192.168.203.134's password:

Starting install...
Allocating 'UAKVM3.qcow2'                                                                                                                     | 4.0 GB  00:00:00
Creating domain...                                                                                                                            |    0 B  00:00:00

** (virt-viewer:11764): WARNING **: Couldn't connect to accessibility bus: Failed to connect to socket /tmp/dbus-

 

8. You can also launch the KVM guest console from the management node.

[root@UA-KVM1 ~]# virt-viewer --connect qemu+ssh://root@192.168.203.134/system

** (virt-viewer:11929): WARNING **: Couldn't connect to accessibility bus: Failed to connect to socket /tmp/dbus-6XZ1eVgijP: Connection refused

Hope this article is informative to you. Share it ! Comment it !! Be Sociable !!!

The post Launch the first KVM instance using CLI – Part 4 appeared first on UnixArena.

Deploy KVM instance using Virt-Manager (VMM) – GUI – Part 5


Virt-Manager is a powerful GUI tool to manage multiple KVM hosts and their associated VMs. It supports Xen virtualization too. It can be installed on the KVM hypervisor hosts or on a centralized management host to manage multiple hypervisors from one place. Using virt-manager, you can start, pause (suspend) and shut down VMs, display VM details (e.g. vCPUs, memory, disk space), add devices to VMs, clone VMs and create new VMs. The latest version also provides an interface to live-migrate a running VM from one KVM host to another when a shared storage pool is used.

Let’s launch the virt-manager.

Launching virt-manager from the KVM host:

1. Log in to the KVM host as the root user. (Make sure X11 forwarding is enabled and you have installed terminal emulation software, e.g. MobaXterm.)

2. Execute the virt-manager command to launch the GUI.

virt-Manager

 

3. Click the highlighted icon to create a new VM (see the above screenshot).

 

4. Specify the installation source method. Let me use the local ISO image.

Select the KVM guest Installation source method

 

5. Specify the ISO image path.

Specify the ISO image location – KVM guest

 

6. Allocate the resources for the KVM guest (e.g. vCPUs and memory).

Set the resource limits – KVM

 

7. Create the virtual disk for the guest installation (the root disk of the guest OS).

Specify the disk image size – KVM Guest

 

8. Enter the VM name and specify the bridge we created earlier (refer to the earlier parts of this KVM tutorial) to provide external network access to the guest.

Enter the VM name and specify bridge

 

9. Click Finish to create the VM.

KVM – Guest Domain creation

 

Once the VM is created, you will get a console like the one below.

KVM Guest console

 

You can complete the VM installation.

 

10. To view the KVM guest hardware details, click on the highlighted icon.

To see the VM hardware details – KVM

 

You can also add new virtual hardware (e.g. additional disks, NICs) using the above window.

 

11. KVM also provides a forced VM power-off option (just like pulling the power cord on a physical server).

KVM guest – Force power off Option

 

12. KVM supports snapshots. virt-manager provides an interface to take snapshots of a running machine.

Take the snapshot of KVM guest

 

Click on the "+" icon to create a snapshot of the VM.

Guest Snapshot name
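
The same snapshot can also be taken from the CLI with virsh; a minimal sketch (the snapshot name and description below are examples of my own):

[root@UA-HA ~]# virsh snapshot-create-as UAKVM2 snap1 "before changes"
[root@UA-HA ~]# virsh snapshot-list UAKVM2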

 

13. VMM (Virtual Machine Manager) also provides an interface to migrate or clone the VM. Just open the VM's Virtual Machine menu.

KVM Guest – Migrate clone Delete
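
Cloning is also available from the CLI through virt-clone; a hedged sketch (the clone name is an example, and the source VM should be shut off first):

[root@UA-HA ~]# virt-clone --original UAKVM2 --name UAKVM2-clone --auto-clone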

 

14. You can launch the VM's console at any time by double-clicking the listed VM.

Clone & Live Migration option

 

Centralized KVM host management (like the VMware vSphere Client):

If you have installed VMM on a node other than the KVM host, you need to connect to the KVM hosts using SSH.

1. Configure SSH passwordless authentication for the root user between the KVM management node and the KVM hosts.

2. Log in to the management node where you have installed virt-manager (with X11 forwarding enabled).

3. Execute the virt-manager command to launch the VMM GUI.

VMM on Management Node – KVM

 

4. Enter the KVM host IP address.

Connect Remote KVM host – VMM
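
Alternatively, the remote connection can be opened directly from the command line when launching virt-manager (replace the IP with your own KVM host's address):

[root@UA-KVM1 ~]# virt-manager --connect qemu+ssh://root@192.168.203.134/system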

 

You should see a window like the one below.

KVM Management Node

 

Hope this article is informative to you.

Share it ! Comment it !! Be Sociable !!!

The post Deploy KVM instance using Virt-Manager (VMM) – GUI – Part 5 appeared first on UnixArena.

KVM – Virtual Machine Manager Font issue – VMM GUI


Virtual Machine Manager (VMM or virt-manager) is a GUI tool to manage KVM hypervisors. This tool can be installed on the KVM hosts or on a remote system to manage the VMs. When I tried to access virt-manager via SSH X11 forwarding on RHEL 7.2, I got the GUI without any fonts (just junk characters). Red Hat should add the required font packages as "virt-manager" package dependencies to prevent this problem.

Screenshot:

Virt-Manager with Junk Letters

 

To fix the issue, you must install the required fonts for VMM. On RHEL 7.2, I installed the following font packages, which fixed the issue.

[root@UA-KVM1 ~]# yum install ghostscript-fonts.noarch urw-fonts.noarch
Loaded plugins: product-id, search-disabled-repos, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
Resolving Dependencies
--> Running transaction check
---> Package ghostscript-fonts.noarch 0:5.50-32.el7 will be installed
---> Package urw-fonts.noarch 0:2.4-16.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

=====================================================================================================================================================================
 Package                                      Arch                              Version                                 Repository                              Size
=====================================================================================================================================================================
Installing:
 ghostscript-fonts                            noarch                            5.50-32.el7                             repo-update                            324 k
 urw-fonts                                    noarch                            2.4-16.el7                              repo-update                            3.0 M

Transaction Summary
=====================================================================================================================================================================
Install  2 Packages

Total download size: 3.4 M
Installed size: 4.8 M
Is this ok [y/d/N]: y
Downloading packages:
(1/2): ghostscript-fonts-5.50-32.el7.noarch.rpm                                                                                               | 324 kB  00:00:00
(2/2): urw-fonts-2.4-16.el7.noarch.rpm                                                                                                        | 3.0 MB  00:00:00
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                                                                 12 MB/s | 3.4 MB  00:00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : urw-fonts-2.4-16.el7.noarch                                                                                                                       1/2
  Installing : ghostscript-fonts-5.50-32.el7.noarch                                                                                                              2/2
  Verifying  : ghostscript-fonts-5.50-32.el7.noarch                                                                                                              1/2
  Verifying  : urw-fonts-2.4-16.el7.noarch                                                                                                                       2/2

Installed:
  ghostscript-fonts.noarch 0:5.50-32.el7                                                urw-fonts.noarch 0:2.4-16.el7

Complete!
[root@UA-KVM1 ~]#
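
To confirm that the newly installed fonts are visible to fontconfig (an optional check), you can list them with fc-list:

[root@UA-KVM1 ~]# fc-list | grep -i urw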

 

Then I relaunched virt-manager.

KVM virt-manager Fonts issue Fixed

 

Hope this article is informative to you. Share it ! Comment it !! Be Sociable !!!

The post KVM – Virtual Machine Manager Font issue – VMM GUI appeared first on UnixArena.
