Examples of working with Solaris zones
Create a zone
Option A: Sparse zone (piggy-back global Solaris devices and software)
$ mkdir /zones
$ cd /zones ; mkdir appsvr1 ; chmod 700 appsvr1
$ zonecfg -z appsvr1
zonecfg:appsvr1> create
zonecfg:appsvr1> set zonepath=/zones/appsvr1
zonecfg:appsvr1> set autoboot=true
zonecfg:appsvr1> add net
zonecfg:appsvr1:net> set physical=bge1
zonecfg:appsvr1:net> set address=192.168.1.101
zonecfg:appsvr1:net> end
zonecfg:appsvr1> verify
zonecfg:appsvr1> commit
zonecfg:appsvr1> exit
Note: use the "add fs" option for mounting additional filesystems (eg san or internal slice).
Note: use the "add inherit-pkg-dir" option to include a global directory in the sparse zone.
Option B: Whole-root zone (provides a writable /usr, /sbin, /lib and /platform). Use "create -b" in zonecfg, or remove specific inherited directories as follows,
zonecfg:appsvr1> remove inherit-pkg-dir dir=/usr
To change settings, halt the zone, select the resource in zonecfg, apply the change, and then boot the zone:
$ zoneadm -z appsvr1 halt
$ zonecfg -z appsvr1
zonecfg:appsvr1> select net physical=igb0
zonecfg:appsvr1:net> info
net:
address: 10.18.40.102
physical: igb0
defrouter not specified
zonecfg:appsvr1:net> set address=10.18.40.58
zonecfg:appsvr1:net> end
zonecfg:appsvr1> commit
zonecfg:appsvr1> exit
$ zoneadm -z appsvr1 boot
Login to check the changes (see below).
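For a quick non-interactive check from the global zone, something like:
$ zlogin appsvr1 'ifconfig -a'
$ zlogin appsvr1 'netstat -rn'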
Another example, changing the hostid in a zone,
$ zonecfg -z appsvr1 set hostid=1337833f
$ zoneadm -z appsvr1 reboot
$ zlogin appsvr1 "hostid"
To cap memory and use dedicated, limited CPU resources (CPU can also be limited by shares, see the sketch below),
$ zonecfg -z appsvr1 "add capped-memory; set physical=1024m; set swap=1g; end"
$ zonecfg -z appsvr1 "add dedicated-cpu; set ncpus=2-4; end"
$ zoneadm -z appsvr1 reboot
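To limit by shares instead of dedicating CPUs, a minimal sketch (assumes the FSS scheduler is the system default; the share value of 20 is illustrative):
$ dispadmin -d FSS
$ zonecfg -z appsvr1 "set cpu-shares=20"
$ zoneadm -z appsvr1 reboot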
Delete a zone
$ zoneadm -z appsvr1 halt
$ zoneadm -z appsvr1 uninstall
$ zonecfg -z appsvr1 delete
Install, boot and login to the new zone
$ zoneadm list -cv
ID NAME STATUS PATH BRAND IP
0 global running / native shared
- appsvr1 configured /zones/appsvr1 native shared
$ zoneadm -z appsvr1 install
$ zoneadm -z appsvr1 boot
$ ifconfig -a
$ ping 192.168.1.101
$ zoneadm list -cv
$ zlogin -C -e [ appsvr1
Refer to Oracle Support Doc ID 1010051.1 if it fails.
Note: system identification will commence; enter language, locale, etc. You may also add users, enable SSH root login, and make other customisations. With "-e [" above, type "[." (rather than the default "~.") to quit the zlogin console. Test remote access with SSH. The zone status is now "running".
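For example, a sketch of enabling SSH root login inside the zone (assumes the bundled Solaris SSH service):
$ zlogin appsvr1
$ vi /etc/ssh/sshd_config    # set PermitRootLogin yes
$ svcadm restart ssh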
Clone a zone on the same server
In the actions below you are cloning the original zone (appsvr1) onto the same host. Export the original zone config to a file and update it for the new zone (hostname, IP address, etc.).
$ zonecfg -z appsvr1 export -f /tmp/appsvr2.zone.cfg
$ vi /tmp/appsvr2.zone.cfg
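The exported file contains plain zonecfg commands; typically you would edit at least the zonepath and the IP address, for example (values illustrative, based on the config created earlier):
create -b
set zonepath=/zones/appsvr2
set autoboot=true
add net
set physical=bge1
set address=192.168.1.102
end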
$ zonecfg -z appsvr2 -f /tmp/appsvr2.zone.cfg
$ zoneadm list -cv
$ zonecfg -z appsvr2 verify
$ zoneadm -z appsvr1 halt
$ zoneadm -z appsvr2 clone appsvr1
$ zoneadm -z appsvr1 boot
$ zoneadm -z appsvr2 boot
$ zlogin -C appsvr2
Make sure you can connect with SSH to each zone. Review status,
$ ifconfig -a
$ zoneadm list -cv
The new zone contains any users and files that existed in the original zone when it was cloned.
Clone a zone to another server
Ensure both hosts are at the same Solaris release and patch level; attaching a zone to a host with older patches can prevent a successful clone. An outage for the original zone is required here.
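A quick way to compare the two hosts, run on both, for example:
$ cat /etc/release
$ uname -v
$ showrev -p | wc -l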
$ zoneadm -z appsvr1 halt
$ zoneadm -z appsvr1 detach
$ cd /zones
$ tar Ecf - appsvr1 | gzip --fast -c > /tmp/appsvr1.tar.gz
$ zonecfg -z appsvr1 export -f /tmp/appsvr1.cfg
Copy the archive and config file to the target host, then re-attach and boot the original zone to end its outage.
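For example (the target hostname "drhost" is a placeholder):
$ scp /tmp/appsvr1.tar.gz /tmp/appsvr1.cfg drhost:/tmp/
$ zoneadm -z appsvr1 attach
$ zoneadm -z appsvr1 boot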
On the target host, extract the tar archive to the target zone location and edit the config (new zonepath if it differs, IP address, NIC, etc.):
$ cd /drzones
$ gzcat /tmp/appsvr1.tar.gz | tar xf -
$ vi /tmp/appsvr1.cfg
Create the zone,
$ zonecfg -z appsvr1 -f /tmp/appsvr1.cfg
$ zoneadm list -cv
Attach the new zone; if a plain attach fails due to package or patch differences, use "attach -u" to update the zone for compatibility with the new global zone.
$ zoneadm -z appsvr1 attach
$ zoneadm -z appsvr1 attach -u
$ zoneadm list -cv
$ zoneadm -z appsvr1 boot
Note: it is possible to force the "upgrade" attach, and also possible to clone a live zone without downtime; however, neither is the recommended approach and the resulting zone may not get vendor support.
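For reference, a minimal sketch of a forced attach, which skips the validation checks (use with care):
$ zoneadm -z appsvr1 attach -F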
Export disk devices to a zone
A raw device is exported as follows:
$ zonecfg -z appsvr1
> add device
> set match=/dev/rdsk/cXtXdXsX
> end
> verify
> commit
> exit
A block device is exported the same as above, except like this:
> set match=/dev/dsk/cXtXdXsX
Reboot the zone, then add a filesystem with "newfs" if applicable (a sketch follows the commands below):
$ zlogin appsvr1 'shutdown -y -g0 -i6'
$ fstyp /dev/dsk/cXtXdXsX
$ mount /dev/dsk/cXtXdXsX /u05
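If fstyp shows no filesystem on the slice, one can be created inside the zone first, for example:
$ newfs /dev/rdsk/cXtXdXsX
$ mkdir /u05
$ mount /dev/dsk/cXtXdXsX /u05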
To remove the exported device from the zone, unmount it inside the zone, then run this and reboot the zone afterward,
$ zonecfg -z appsvr1
> remove device match=/dev/dsk/cXtXdXsX
> verify
> commit
> exit
Mount a global zone filesystem in a non-global zone
Example: provision 100 GB of disk, then add it as /pub in a zone.
In the global zone, create the storage pool, unmount it, and set the mountpoint property:
$ zpool create appsvr1_pub c0t3d0
$ umount /appsvr1_pub
$ zfs set mountpoint=/pub appsvr1_pub
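The zpool above uses the whole disk; if the disk is larger than 100 GB, a quota can cap the dataset to match the example (value illustrative):
$ zfs set quota=100g appsvr1_pub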
Add dataset to zone config,
$ zonecfg -z appsvr1
> add dataset
dataset> set name=appsvr1_pub
dataset> end
> verify
> commit
> exit
Reboot zone, check for the new filesystem
$ zoneadm -z appsvr1 reboot
$ zlogin appsvr1 'df -h /pub'