In this how-to blog post, I will add Pure Storage iSCSI to Linux CentOS – Part 1. This is the first part of two articles, where the final goal is to move an Oracle DB from a NetApp NFS share to an iSCSI LUN on Pure Storage.
In this first part, the plan is to set up the iSCSI initiator in Linux, enable multipath, and add the Pure Storage LUNs as mount points.
In the second article, I will explain how to move the Oracle DB from the NFS mount point to the iSCSI mount point without changing anything in the Oracle configuration.
Environment:
- Oracle 12c R2
- Linux CentOS v7
- NetApp FAS2240-4
- Pure Storage FlashArray //X20 R3 Model
Since we have two Oracle servers connected to NetApp, I need to move them to our new Pure Storage system. But since these are physical servers, before making any changes on the production Oracle servers, I built a virtual POC with the same environment and replicated all the steps for the move.
These are all the steps in the process for this first part.
Linux / Oracle
- Installed CentOS 7.
- Created two NFS volumes in NetApp (similar to those that exist in the production Oracle server).
- Installed Oracle 12c (in /netapp-Ora01 for applications and in /netapp-Ora02 for the DB).
- Created a DB and some tables with 1M random records to have some data.
Linux / Pure Storage
- Configured iSCSI in Linux with the iSCSI initiator and multipath.
- Connected Linux to Pure Storage using four paths (two per interface).
- Created two iSCSI LUNs in Pure Storage and added them to the Linux server.
- Created two temporary mount points in Linux (Pure-Ora01 and Pure-Ora02).
- Mounted the iSCSI devices on the new mount points.
- Set oracle and oinstall permissions on the mount points.
- Tested multipath by disconnecting network interfaces to check that iSCSI was configured properly and the mounts were still available.
In both articles, I will not go through installing Linux CentOS v7 or Oracle 12c R2 because there are already many articles out there.
After Linux and Oracle are installed, let us start configuring Linux to get the iSCSI initiator service installed.
First, I will check if there are any updates to apply on my CentOS:
[root@oracle-rh-01-vmwarehome.lab /]# yum update
Some updates were applied, and now we can continue.
Next, we need to install the iSCSI initiator tools.
[root@oracle-rh-01-vmwarehome.lab ~]# yum install iscsi-initiator-utils
If you try to start your iSCSI initiator at this point, you will get:
[root@oracle-rh-01-vmwarehome.lab ~]# service iscsi status
Redirecting to /bin/systemctl status iscsi.service
● iscsi.service - Login and scanning of iSCSI devices
   Loaded: loaded (/usr/lib/systemd/system/iscsi.service; enabled; vendor preset: disabled)
   Active: inactive (dead)
Condition: start condition failed at Sat 2021-03-27 23:48:39 CET; 6s ago
           ConditionDirectoryNotEmpty=/var/lib/iscsi/nodes was not met
     Docs: man:iscsiadm(8)
           man:iscsid(8)

Mar 28 01:34:26 oracle-rh-01.vmwarehome.lab systemd[1]: Unit iscsi.service cannot be reloaded because it is inactive.
Mar 28 01:34:26 oracle-rh-01.vmwarehome.lab systemd[1]: Unit iscsi.service cannot be reloaded because it is inactive.
This is because the iSCSI service only starts and becomes active when iSCSI devices are connected; as the status output shows, the start condition fails while /var/lib/iscsi/nodes is still empty. We will leave the iSCSI initiator service for now.
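You can confirm the failed start condition yourself: the nodes directory stays empty until the first successful discovery later in this post, so at this point the command below prints nothing.

[root@oracle-rh-01-vmwarehome.lab ~]# ls -A /var/lib/iscsi/nodes
[root@oracle-rh-01-vmwarehome.lab ~]#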
Now we will install the multipath packages.
[root@oracle-rh-01-vmwarehome.lab ~]# yum install device-mapper-multipath device-mapper-multipath-libs
Enable the default multipath configuration file and start the multipath daemon:
[root@oracle-rh-01-vmwarehome.lab ~]# mpathconf --enable --with_multipathd y
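To confirm what this did, run mpathconf with no arguments; it prints the current state (the exact lines vary a little between versions, but it should look roughly like this):

[root@oracle-rh-01-vmwarehome.lab ~]# mpathconf
multipath is enabled
find_multipaths is enabled
user_friendly_names is enabled
dm_multipath module is loaded
multipathd is running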
Since we are using Pure Storage, we now need to add the Pure Storage multipath settings to /etc/multipath.conf.
This will enable multipath with the recommended path selection policy (queue-length) at boot.
[root@oracle-rh-01 /]# vi /etc/multipath.conf
Add the following to /etc/multipath.conf. This is for RHEL 7.x and CentOS 7. For different Linux distributions or versions, you need different parameters; check the Pure Storage documentation for your Linux version.
defaults {
    polling_interval 10
}

devices {
    device {
        vendor               "PURE"
        product              "FlashArray"
        hardware_handler     "1 alua"
        path_selector        "queue-length 0"
        path_grouping_policy group_by_prio
        prio                 alua
        path_checker         tur
        fast_io_fail_tmo     10
        failback             immediate
        no_path_retry        0
        dev_loss_tmo         60
    }
}
After adding the DM-Multipath parameters, restart the service: systemctl restart multipathd.service
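To verify that the PURE device section was actually picked up, you can ask the running daemon for its merged configuration (a quick sanity check, not part of the original steps):

[root@oracle-rh-01-vmwarehome.lab ~]# multipathd show config | grep -A 12 '"PURE"'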
Next, check your IQN and then create a Target in your Pure Storage.
[root@oracle-rh-01-vmwarehome.lab ~]# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1994-05.com.redhat:25c0b6a954f4
While you create the new host in Pure Storage, reboot your Linux server.
How to create a Host and add the IQN in Pure Storage
First, create a host. Select the Storage tab in the main menu, then Hosts, and use the + icon to create the host.
After the host is created, select it and add the host IQN, copied from the output of the Linux command above.
Next, add the LUNs that you created previously to the host.
At this moment, after the Linux reboot, Linux should already see your LUNs.
To see the created LUNs in Linux, you need to use the iSCSI initiator to discover the iSCSI Portal.
The discovery commands will depend on how many network interfaces are in use on your Pure Storage.
You can get this information directly from Pure Storage (by connecting to the Pure Storage console), using the command: pureport list
In my case, I have four interfaces per controller.
So I need to run the discovery command against each IP.
Note: In Pure Storage, you should always use a CHAP policy and restrict each host to its own IQN, so that source hosts can only see their own LUNs and not other LUNs in Pure Storage that may not use CHAP or are not restricted per host.
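This lab does not actually configure CHAP, but as a minimal sketch, one-way CHAP on the initiator side goes into /etc/iscsi/iscsid.conf (the username and password below are placeholders and must match whatever you configure on the Pure Storage host object):

# /etc/iscsi/iscsid.conf -- placeholder CHAP credentials
node.session.auth.authmethod = CHAP
node.session.auth.username = oracle-rh-01
node.session.auth.password = MySecretChapSecret1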
iscsiadm -m discovery -t st -p 192.100.27.95:3260
iscsiadm -m discovery -t st -p 192.100.27.96:3260
iscsiadm -m discovery -t st -p 192.100.27.166:3260
iscsiadm -m discovery -t st -p 192.100.27.167:3260
Example:
[root@oracle-rh-01-vmwarehome.lab ~]# iscsiadm -m discovery -t st -p 192.100.27.95:3260
192.100.27.95:3260,21 iqn.2010-06.com.purestorage:flasharray.2c7xxxxxxebb80
192.100.27.96:3260,21 iqn.2010-06.com.purestorage:flasharray.2c7xxxxxxebb80
Some useful commands:
- Use the default iface to connect:
iscsiadm -m discovery -t st -p 192.168.10.252 -I default -P 1
- Use a specific iface on your host to connect to your Pure Storage:
iscsiadm -m discovery -t st -p 192.100.27.95 -I ieth0 (replace ieth0 with your iface name)
- Create a new iface to use a specific network interface (see the sketch after this list):
iscsiadm -m iface -I iface_name --op=new
- Delete an iface:
iscsiadm -m iface -I iface_name --op=delete
- Check the ifaces that exist in your iSCSI configuration:
iscsiadm -m iface
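As referenced above, here is a minimal sketch of creating an iface, binding it to a physical NIC, and discovering through it (iface0 and ens224 are placeholder names; adjust to your environment):

# Create a named iface and bind it to a specific NIC (ens224 is an assumption)
iscsiadm -m iface -I iface0 --op=new
iscsiadm -m iface -I iface0 --op=update -n iface.net_ifacename -v ens224
# Discover through that iface only
iscsiadm -m discovery -t st -p 192.100.27.95:3260 -I iface0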
Now log in to the Pure Storage controllers (run the same login command for each controller you have in your storage system).
[root@oracle-rh-01-vmwarehome.lab ~]# iscsiadm -m node -p 192.100.27.95 --login
Logging in to [iface: default, target: iqn.2010-06.com.purestorage:flasharray.2c7xxxxxxebb80, portal: 192.100.27.95,3260] (multiple)
Login to [iface: default, target: iqn.2010-06.com.purestorage:flasharray.2c7xxxxxxebb80, portal: 192.100.27.95,3260] successful.
Set automatic login on boot: iscsiadm -m node --op=update -n node.startup -v automatic
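To make sure the sessions and multipath devices come back after a reboot, it is also worth enabling the related services (not shown in the original steps, but standard on CentOS 7):

systemctl enable iscsid.service iscsi.service
systemctl enable multipathd.service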
Some useful commands:
- Check all Targets on your Storage System:
iscsiadm -m node
- Connect to a specific Target:
iscsiadm -m node -T iqn.2000-01.com.synology:HomeStorage.Oracle.75c007999 -p 192.168.10.252 -l
- Remove a specific Target connection:
iscsiadm -m node -T iqn.2000-01.com.synology:HomeStorage.Oracle.75c007999 -p 192.168.10.252 -u
Now check your multipath device-mapper IDs: multipath -ll
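For reference, the output for one of the Pure LUNs looks roughly like this (the sd device names and host numbers are illustrative, but the WWID at the top is the ID you will mount by):

3624a93707874518ac7ac4829000130a6 dm-3 PURE    ,FlashArray
size=1.0T features='0' hwhandler='1 alua' wp=rw
`-+- policy='queue-length 0' prio=50 status=active
  |- 3:0:0:1 sdb 8:16 active ready running
  |- 4:0:0:1 sdc 8:32 active ready running
  |- 5:0:0:1 sdd 8:48 active ready running
  `- 6:0:0:1 sde 8:64 active ready running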
As we can see above, I have the two LUNs (1TB each) with four connections each. So we are OK, and now we can start to mount our LUNs on mount points.
Checking my partitions now, I see those two new disks/volumes: fdisk -l
With four connections per volume, we get eight disk records, plus the mapper devices (I am only displaying two here). Always double-check that the mapper ID matches the ID of the volume you want to use.
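If the fdisk output is hard to read with eight duplicate disks, lsblk groups each sd path under its multipath device and makes the mapping obvious (a convenience cross-check, not in the original steps):

lsblk -o NAME,SIZE,TYPE,MOUNTPOINT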
Now let us create the mount points so that we can mount our volumes.
[root@oracle-rh-01-vmwarehome.lab ~]# mkdir /Pure-Ora01 /Pure-Ora02
Create a filesystem on each volume using the device-mapper IDs that we listed above.
[root@oracle-rh-01-vmwarehome.lab ~]# mkfs.ext4 /dev/mapper/3624a93707874518ac7ac4829000130a7
[root@oracle-rh-01-vmwarehome.lab ~]# mkfs.ext4 /dev/mapper/3624a93707874518ac7ac4829000130a6
mke2fs 1.42.9 (28-Dec-2013)
Discarding device blocks: done
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=1024 blocks
65536000 inodes, 262144000 blocks
13107200 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2409627648
8000 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
        102400000, 214990848

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
Now mount your volumes to your mount points.
[root@oracle-rh-01-vmwarehome.lab ~]# mount /dev/mapper/3624a93707874518ac7ac4829000130a6 /Pure-Ora01
[root@oracle-rh-01-vmwarehome.lab ~]# mount /dev/mapper/3624a93707874518ac7ac4829000130a7 /Pure-Ora02
Check if the mounts are using the proper volumes/devices:
[root@oracle-rh-01-vmwarehome.lab ~]# df -h /Pure-Ora01
Filesystem                                     Size  Used Avail Use% Mounted on
/dev/mapper/3624a93707874518ac7ac4829000130a6  985G   77M  935G   1% /Pure-Ora01
[root@oracle-rh-01-vmwarehome.lab ~]# df -h /Pure-Ora02
Filesystem                                     Size  Used Avail Use% Mounted on
/dev/mapper/3624a93707874518ac7ac4829000130a7  985G   77M  935G   1% /Pure-Ora02
Also, do not forget to add both mounts to your Linux /etc/fstab:
/dev/mapper/3624a93707874518ac7ac4829000130a6 /Pure-Ora01 ext4 _netdev,rw 0 0
/dev/mapper/3624a93707874518ac7ac4829000130a7 /Pure-Ora02 ext4 _netdev,rw 0 0
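Before rebooting, it is worth validating the fstab entries (a standard check, not in the original post): unmount both volumes and let mount -a remount them straight from fstab.

umount /Pure-Ora01 /Pure-Ora02
mount -a
df -h /Pure-Ora01 /Pure-Ora02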
Test read/write on the new mounts:
[root@oracle-rh-01-vmwarehome.lab ~]# touch /Pure-Ora01/file.pure
[root@oracle-rh-01-vmwarehome.lab ~]# touch /Pure-Ora02/file.pure
All permissions are OK, and the new disks are accessible and writable.
Since I need to move Oracle to these new volumes, I will already set the Oracle permissions for the final step.
[root@oracle-rh-01-vmwarehome.lab ~]# chown -R oracle:oinstall /Pure-Ora01 /Pure-Ora02
[root@oracle-rh-01-vmwarehome.lab ~]# chmod -R 775 /Pure-Ora01 /Pure-Ora02
[root@oracle-rh-01-vmwarehome.lab ~]# chmod g+s /Pure-Ora01 /Pure-Ora02
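To check the result, list the directories; with 775 plus the setgid bit, the group triad should show rws (the timestamps and sizes below are illustrative):

[root@oracle-rh-01-vmwarehome.lab ~]# ls -ld /Pure-Ora01 /Pure-Ora02
drwxrwsr-x. 3 oracle oinstall 4096 Mar 28 02:10 /Pure-Ora01
drwxrwsr-x. 3 oracle oinstall 4096 Mar 28 02:10 /Pure-Ora02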
Double-check that all permissions were set; this finishes the configuration of the Linux server with the new volumes from Pure Storage.
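One last item from the checklist at the top: the multipath failover test. A minimal sketch (ens224 is a placeholder for one of your iSCSI NICs; adjust to your interface names):

# Drop one iSCSI interface; its paths should go faulty in multipath -ll
ifdown ens224
multipath -ll
# I/O on /Pure-Ora01 and /Pure-Ora02 should keep working; then recover
ifup ens224
multipath -ll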
So that this blog post does not get too long, I will explain the Oracle move from the NetApp NFS shares to these new Pure Storage iSCSI volumes in the second part.
I hope this blog post about how to add Pure Storage iSCSI to Linux CentOS – Part 1 was useful.
Share this article if you think it is worth sharing. If you have any questions or comments, comment here, or contact me on Twitter.