
How To Build A Standalone File Server With Nexenta 3.0 Beta2


Nexenta is a project developing a Debian userland for the OpenSolaris kernel. This combines the advantages of apt as a package repository (based on the Ubuntu LTS apt repository, currently 8.04) with the advantages of the ZFS filesystem. In the resulting setup, every user can have his/her own home directory, accessible with read/write permissions via the SMB protocol or NFS.

1 Preliminary Note

A term you should be familiar with is the zpool. A zpool is similar to a logical volume group: a system can have multiple zpools, and each zpool can contain multiple ZFS filesystems, as I will demonstrate. Some advantages of ZFS are built-in compression and deduplication, as well as pools that are easy to create, manipulate, and destroy. OpenSolaris (and by extension, Nexenta) has already integrated file-serving protocols such as NFS and SMB with the ZFS filesystem, so we won't need to install a separate Samba service. I have culled information about the zfs and zpool commands from various sources; all sources used for the creation of this article are linked at the end.
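To make the zpool/filesystem relationship concrete, here is a minimal sketch. The pool name tank and the disk names are hypothetical placeholders; yours will differ:

```shell
# Create a pool named "tank" from two whole disks (hypothetical names).
zpool create tank c0d1 c1d1

# Create two independent ZFS filesystems inside the pool; each gets its
# own mountpoint and can carry its own properties (quota, etc.).
zfs create tank/media
zfs create -o quota=50g tank/backups

# List the pools, then the filesystems inside this one.
zpool list
zfs list -r tank
```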

I’m using a system with the hostname server1.example.com and the IP address 192.168.0.100.

 

2 Installing Nexenta

You can obtain the ISO for Nexenta here: http://www.nexenta.org/projects/site/wiki/DownloadUnstable

I am using the unstable version, Nexenta 3.0 Beta2, because it is based on OpenSolaris build NV134, which includes deduplication and ZFS compression. Deduplication allows identical data within the filesystem to be stored only once, and ZFS compression applies a built-in compression algorithm to reduce file sizes where possible.

Download the ISO, extract it if needed, and burn it to disc (or boot from the ISO directly if you are installing to a virtual machine).

1-isoboot

First, you’ll see the welcome screen:

2-installerwelcome

Next, set your language, locale and time:

3-selectlang

4-selectlocale

5-selectlocale2

6-selecttime

7-confirmtime

Now we'll set up the disk the OS will be installed on. If you have multiple disks, select only the disk you want the OS installed to.

8-selectdisk

If you have additional disks intended for a large file repository (such as large media shares, or the users' home directories on a much larger array of disks), do not select them during installation. I will show how to add disks, create shares, and move home directories later on.

Confirm the disk you want to install on:

9-confirmdisk

10-partitioning

Take note here: the format for disk labels in Nexenta is not the same as in OpenSolaris or Linux. Your disks will have names like c0d0, c0d1, c1d1, etc. This will be important later when we add disks to zpools.

You should see the progress bar showing the installation of the base system now.

11-installing

Next you will be asked to set the root password:

12-setroot

13-rootconfirm

After that, you will create your user. This is similar to the user created during an Ubuntu install; it will have sudo privileges.

14-createuser

15-userpwd

16-userconfirm

17-userconfirm2

Next, define your host information. In this example, I define the host name as server1.example.com.

18-definehostinfo

19-hostidconfirm

Next you’ll set up your networking. If you are not setting a static IP address, choose DHCP here. Otherwise, enter your static IP, default gateway and DNS server information.

20-configurenetwork

21-dhcpyesorno

Now the install will complete by setting up services. Once you see the completion screen, eject the media and reboot.

22-completinginstall

23-installcomplete

24-firstboot

3 Add Disks and Create ZPools:

First, let's see what disks are available if we're adding some. On the console or via an ssh login to server1.example.com, run sudo su - to become root:

dfed@server1:~$ sudo su -

[sudo] password for dfed:
root@server1:~#

Then type the following to see which disks are available:

format

This will give you a readout similar to the following:

AVAILABLE DISK SELECTIONS:
0. c0d0
/pci@0,0/pci-ide@1,1/ide@0/cmdk@0,0
1. c0d1
/pci@0,0/pci-ide@1,1/ide@0/cmdk@1,0
2. c1d1
/pci@0,0/pci-ide@1,1/ide@1/cmdk@1,0
Specify disk (enter its number):

Make note of the disk names and press Ctrl-C to exit. You do not need to format a disk before adding it to a zpool. Specifically, note the disks that do not hold your OS installation; you should have that disk's name noted from the install. In this case, c0d1 and c1d1 are the two disks I want to add, each 2T in size.

You have several options for the layout of the pool containing your disks. If you just want to concatenate the disks, create a single striped zpool. If you want mirrored storage (the equivalent of RAID 1), create a mirrored pool. You can also create a RAID-Z pool, which is the equivalent of a RAID 5 array. Since we have only two disks and I am more interested in space than redundancy, I will create a simple striped pool like this:

root@server1:~# zpool create pool1 c0d1 c1d1

To create a mirrored pool, you would do the following:

root@server1:~# zpool create pool1 mirror c0d1 c1d1

If you had more than two disks to mirror, you would pair them up as follows:

root@server1:~# zpool create pool1 mirror disk1 disk2 mirror disk3 disk4

where disk1 through disk4 are the system names of those disks. To create a RAID-Z pool:

root@server1:~# zpool create pool1 raidz disk1 disk2 disk3 disk4 disk5

To verify the pool’s creation:

root@server1:~# zpool list

NAME      SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
pool1    3.97T   213K  3.97T     0%  1.00x  ONLINE  -
syspool   127G  1.31G   126G     1%  1.00x  ONLINE  -

And we see the stripe of the two 2T disks shown earlier in the list. To destroy a zpool and start over, simply type:

root@server1:~# zpool destroy pool1
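Before (or instead of) destroying a pool, it is worth knowing how to inspect it. zpool status shows the vdev layout and device health, and a scrub verifies checksums on all stored data:

```shell
# Show the vdev layout (stripe/mirror/raidz), the state of each
# device, and any errors the pool has seen.
zpool status pool1

# Walk all data in the pool and verify checksums; re-run zpool status
# to watch the scrub's progress.
zpool scrub pool1
```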

Now that we've created our pool, you can check that it is mounted under /. If you ls /, you will see pool1 as a directory. Let's say you didn't want that name in the filesystem and instead wanted it mounted at /opt. After destroying the original pool, you would do the following:

root@server1:/# zpool create -m /opt pool1 c0d1 c1d1
root@server1:/# zpool list

NAME      SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
pool1    3.97T   109K  3.97T     0%  1.00x  ONLINE  -
syspool   127G  1.31G   126G     1%  1.00x  ONLINE  -

root@server1:/# df -h

Filesystem             size   used  avail capacity  Mounted on
syspool/rootfs-nmu-000
125G  1007M   123G     1%    /
/devices                 0K     0K     0K     0%    /devices
/dev                     0K     0K     0K     0%    /dev
ctfs                     0K     0K     0K     0%    /system/contract
proc                     0K     0K     0K     0%    /proc
mnttab                   0K     0K     0K     0%    /etc/mnttab
swap                   1.5G   316K   1.5G     1%    /etc/svc/volatile
objfs                    0K     0K     0K     0%    /system/object
sharefs                  0K     0K     0K     0%    /etc/dfs/sharetab
/usr/lib/libc/libc_hwcap1.so.1
124G  1007M   123G     1%    /lib/libc.so.1
fd                       0K     0K     0K     0%    /dev/fd
swap                   1.5G     0K   1.5G     0%    /tmp
swap                   1.5G    36K   1.5G     1%    /var/run
pool1                  3.9T    21K   3.9T     1%    /opt

The -m /path/to/mountpoint flag allows you to mount a pool anywhere. With that in mind, I will now create the pool mounted at /export/home, which is the default location of user home directories in both OpenSolaris and Nexenta. To do this, I have to move my current home directory out of /export/home and then move it back once the pool is created.

root@server1:/# mv /export/home/dfed /opt/
root@server1:/# ls /export/home
root@server1:/# ls /opt

dfed

root@server1:/# zpool create -m /export/home pool1 c0d1 c1d1
root@server1:/# mv /opt/dfed /export/home/
root@server1:/# ls /export/home

dfed

Do a df -h to verify disk/mount sizes:

root@server1:/# df -h

Filesystem             size   used  avail capacity  Mounted on
syspool/rootfs-nmu-000
125G  1007M   123G     1%    /
/devices                 0K     0K     0K     0%    /devices
/dev                     0K     0K     0K     0%    /dev
ctfs                     0K     0K     0K     0%    /system/contract
proc                     0K     0K     0K     0%    /proc
mnttab                   0K     0K     0K     0%    /etc/mnttab
swap                   1.5G   316K   1.5G     1%    /etc/svc/volatile
objfs                    0K     0K     0K     0%    /system/object
sharefs                  0K     0K     0K     0%    /etc/dfs/sharetab
/usr/lib/libc/libc_hwcap1.so.1
124G  1007M   123G     1%    /lib/libc.so.1
fd                       0K     0K     0K     0%    /dev/fd
swap                   1.5G     0K   1.5G     0%    /tmp
swap                   1.5G    36K   1.5G     1%    /var/run
pool1                  3.9T    30K   3.9T     1%    /export/home
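As an aside, destroying and recreating the pool is not the only way to change where it lives: the mountpoint is an ordinary ZFS property that can be changed on a live pool. A minimal sketch (the target path must be empty or nonexistent):

```shell
# Relocate the pool's top-level filesystem; ZFS unmounts it from the
# old path and remounts it at the new one.
zfs set mountpoint=/export/home pool1

# Confirm the change took effect.
zfs get mountpoint pool1
```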

We have now set up the extra disks and are ready to set up users and shared directories. If you are interested in a standalone Samba server, read on. If you are looking to set up NFS, skip to section 6.

 

4 Adding And Managing Users: Samba

At the end of the file /etc/pam.conf add the following line:

[...]
other password required pam_smb_passwd.so.1 nowarn
[...]

This sets the password encryption correctly for the user accounts being shared. Once the line is added, you can create users as you see fit, and when you set their passwords via the passwd command, they will be hashed in an SMB-friendly manner. For your current user, you will need to reset the password with passwd before that user can use the SMB service. To add a user:

root@server1:/# groupadd -g 1001 newuser
root@server1:/# useradd -u 1001 -g 1001 -s /bin/bash -c "New User" -d /export/home/newuser -m newuser

14 blocks

root@server1:/# passwd newuser

New Password:
Re-enter new Password:
passwd: password successfully changed for newuser

root@server1:/# ls -lha /export/home

total 6.0K
drwxr-xr-x 4 root    root    4 Apr 22 15:16 .
drwxr-xr-x 3 root    sys     3 Apr 22 11:51 ..
drwxr-xr-x 2 dfed    dfed    7 Apr 22 12:07 dfed
drwxr-xr-x 2 newuser newuser 8 Apr 22 15:16 newuser

Now we are ready to enable the SMB service and set up ZFS-backed SMB shares. In the next section, we will set up the service as a standalone server in a workgroup; joining an Active Directory domain is not covered in this article (see the note at the end).

5 Samba Services and zfs:

Start the service:

root@server1:/# svcadm enable -r smb/server

If the following warning is issued, you can ignore it:

svcadm: svc:/milestone/network depends on svc:/network/physical, which has multiple instances

Set the workgroup name:

root@server1:/# smbadm join -w SHARING

After joining SHARING the smb service will be restarted automatically.
Would you like to continue? [no]: yes
Successfully joined SHARING

Great. Now we can enable shares for the users. Share access rights in this setup map directly to Unix file permissions: if I share /export/home/newuser, I must connect as newuser, because that directory is owned by newuser:newuser. Let's set up a share. The directory we create the share's filesystem on needs to be empty, so if we are sharing the whole of a user's home directory (and not a folder inside it), we first need to move the existing files out:

root@server1:/# mkdir /opt/tmp/
root@server1:/# mv /export/home/newuser/* /opt/tmp/; mv /export/home/newuser/.[!.]* /opt/tmp/

(The .[!.]* pattern matches the hidden dotfiles while skipping the . and .. entries that a bare .* would also match.)

Verify the files all moved:

root@server1:/# ls -lha /export/home/newuser/; ls -lha /opt/tmp/

total 3.0K
drwxr-xr-x 2 newuser newuser 2 Apr 22 15:51 .
drwxr-xr-x 4 root    root    4 Apr 22 15:16 ..
total 9.5K
drwxr-xr-x 2 root    root       8 Apr 22 15:51 .
drwxr-xr-x 3 root    sys        3 Apr 22 15:50 ..
-rw-r--r-- 1 newuser newuser  220 Apr 22 15:16 .bash_logout
-rw-r--r-- 1 newuser newuser 2.9K Apr 22 15:16 .bashrc
-rw-r--r-- 1 newuser newuser  964 Apr 22 15:16 .profile
-rw-r--r-- 1 newuser newuser 1.1K Apr 22 15:16 local.cshrc
-rw-r--r-- 1 newuser newuser  988 Apr 22 15:16 local.login
-rw-r--r-- 1 newuser newuser 1002 Apr 22 15:16 local.profile

OK, let's create the ZFS filesystem and share it:

root@server1:/# zfs create -o compression=gzip-9 -o dedup=on -o quota=100g -o casesensitivity=mixed -o nbmand=on -o sharesmb=on pool1/newuser

Let's talk about some of the options in that command. "-o dedup=on" enables deduplication, so identical data only has to be stored once; you'll not notice this from a filesystem user's perspective, but it can save a lot of space. "-o compression=gzip-9" sets the filesystem compression to the gzip algorithm at maximum compression level 9 (1 is lowest, 9 is highest). Both settings may impact performance on write-heavy workloads, so consider how fast your disks, CPU, and RAM are before enabling them. "-o quota=100g" caps the user's home directory at 100G; this setting is optional, but handy to know.
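Once the filesystem is in use, you can check how those properties are doing. compressratio is a read-only ZFS property reporting the achieved compression, and the pool-wide deduplication ratio shows up in the DEDUP column of zpool list:

```shell
# Show the properties we set, plus the achieved compression ratio.
zfs get compression,dedup,quota,compressratio pool1/newuser

# The pool-wide deduplication ratio appears in the DEDUP column.
zpool list pool1
```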

Move the files back:

mv /opt/tmp/* /export/home/newuser/; mv /opt/tmp/.[!.]* /export/home/newuser/

Verify the pool:

root@server1:/# zpool list

NAME      SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
pool1    3.97T   239K  3.97T     0%  1.00x  ONLINE  -
syspool   127G  1.31G   126G     1%  1.00x  ONLINE  -

root@server1:/# zfs list

NAME                     USED  AVAIL  REFER  MOUNTPOINT
pool1                    136K  3.91T    31K  /export/home
pool1/newuser             21K  3.91T    21K  /export/home/newuser
syspool                 2.34G   123G    26K  none
syspool/rootfs-nmu-000  1.31G   123G  1007M  legacy
syspool/swap            1.03G   124G    16K  -

Verify the share is up:

root@server1:/# sharemgr show -vp

default nfs=()
zfs
zfs/pool1/newuser smb=()
/export/home/newuser
pool1_newuser=/export/home/newuser     smb=(abe="false" guestok="false")

Now you should be able to connect to the share from another machine (you'll need to authenticate as the correct user, of course). Note that you connect to the SMB share name generated by ZFS, pool1_newuser, as shown in the sharemgr output above:

smb://(ip address or host name)/pool1_newuser

Everything is now set up for SMB. Repeat this process to create users and shares as needed.
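The per-user steps lend themselves to a small script. This is a sketch, not a tested tool: the pool name, quota, and share properties mirror the zfs create example above, the dataset is created before the user so the home directory is born on its own filesystem (avoiding the move-out/move-back dance), and skeleton files are copied in manually on the assumption that /etc/skel exists, as it does on a Debian userland. DRY_RUN=1 (the default here) prints each command instead of executing it, so you can review the plan first:

```shell
#!/bin/sh
# Sketch: create a user plus a dedicated SMB-shared ZFS filesystem.
# With DRY_RUN=1 (the default), commands are printed, not executed.
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" = "1" ]; then echo "$@"; else "$@" || exit 1; fi
}

add_share_user() {
    name=$1; uid=$2
    run groupadd -g "$uid" "$name"
    # Dataset first, so the home directory lives on its own filesystem.
    run zfs create -o compression=gzip-9 -o dedup=on -o quota=100g \
        -o casesensitivity=mixed -o nbmand=on -o sharesmb=on "pool1/$name"
    run useradd -u "$uid" -g "$uid" -s /bin/bash -d "/export/home/$name" "$name"
    run cp -r /etc/skel/. "/export/home/$name"
    run chown -R "$name:$name" "/export/home/$name"
    echo "remember: passwd $name (required before SMB login)"
}

# Preview the commands for a hypothetical second user:
add_share_user newuser2 1002
```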

 

6 NFS Setup:

This one's pretty easy. If you want the NFS server to share all home directories, do the following:

root@server1:~# zfs set sharenfs=on pool1

It's that easy. If you want to share a specific filesystem, like the one we created for newuser, it would be:

root@server1:~# zfs set sharenfs=on pool1/newuser

Verify the share is up:

root@server1:/# sharemgr show -vp

default nfs=()
zfs
zfs/pool1 nfs=()
/export/home

I should point out that the UID and GID of the user on the connecting client should match those on the server; otherwise you won't be able to connect, read, or write. I am not going to go into setting up a NIS master server here, as Nexenta does not include the network/nis/server, network/nis/passwd, network/nis/update, or network/nis/xfr services out of the box. These could be installed as packages from Sun/OpenSolaris, but I haven't looked into this yet. As long as the client you're using (whether it's OS X, Services For UNIX on Windows, or Linux) can either translate or match the UID/GID of the user on the server, you won't run into connection problems. I'll look into this further and write future tutorials on creating a NIS master server and on attaching Samba to an Active Directory domain.
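For reference, mounting one of these NFS shares from a client looks like the following. This is a sketch: the mount point is an arbitrary choice, and on Debian/Ubuntu clients you may need the nfs-common package installed first:

```shell
# Linux client: mount newuser's share at a local mount point.
mkdir -p /mnt/newuser
mount -t nfs 192.168.0.100:/export/home/newuser /mnt/newuser

# Verify the mount and check that the numeric UID/GID on the files
# match your local user, as discussed above.
ls -ln /mnt/newuser
```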

  • Nexenta: http://www.nexenta.org/
  • ZFS Best Practices Guide: http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide

 
